00:00:00.001 Started by upstream project "autotest-per-patch" build number 132322 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.084 The recommended git tool is: git 00:00:00.084 using credential 00000000-0000-0000-0000-000000000002 00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.148 Fetching changes from the remote Git repository 00:00:00.149 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.216 Using shallow fetch with depth 1 00:00:00.216 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.216 > git --version # timeout=10 00:00:00.276 > git --version # 'git version 2.39.2' 00:00:00.276 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.309 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.309 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.795 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.807 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.819 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.819 > git config core.sparsecheckout # timeout=10 00:00:04.830 > git read-tree -mu HEAD # timeout=10 00:00:04.844 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.866 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.866 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.951 [Pipeline] Start of Pipeline 00:00:04.966 [Pipeline] library 00:00:04.968 Loading library shm_lib@master 00:00:04.968 Library shm_lib@master is cached. Copying from home. 00:00:04.987 [Pipeline] node 00:00:04.996 Running on GP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.998 [Pipeline] { 00:00:05.007 [Pipeline] catchError 00:00:05.009 [Pipeline] { 00:00:05.024 [Pipeline] wrap 00:00:05.034 [Pipeline] { 00:00:05.042 [Pipeline] stage 00:00:05.045 [Pipeline] { (Prologue) 00:00:05.265 [Pipeline] sh 00:00:05.549 + logger -p user.info -t JENKINS-CI 00:00:05.563 [Pipeline] echo 00:00:05.565 Node: GP12 00:00:05.572 [Pipeline] sh 00:00:05.870 [Pipeline] setCustomBuildProperty 00:00:05.878 [Pipeline] echo 00:00:05.879 Cleanup processes 00:00:05.884 [Pipeline] sh 00:00:06.165 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.165 2393370 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.178 [Pipeline] sh 00:00:06.461 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.461 ++ grep -v 'sudo pgrep' 00:00:06.461 ++ awk '{print $1}' 00:00:06.461 + sudo kill -9 00:00:06.461 + true 00:00:06.478 [Pipeline] cleanWs 00:00:06.490 [WS-CLEANUP] Deleting project workspace... 00:00:06.490 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.498 [WS-CLEANUP] done 00:00:06.501 [Pipeline] setCustomBuildProperty 00:00:06.513 [Pipeline] sh 00:00:06.798 + sudo git config --global --replace-all safe.directory '*' 00:00:06.878 [Pipeline] httpRequest 00:00:07.218 [Pipeline] echo 00:00:07.220 Sorcerer 10.211.164.20 is alive 00:00:07.226 [Pipeline] retry 00:00:07.227 [Pipeline] { 00:00:07.239 [Pipeline] httpRequest 00:00:07.242 HttpMethod: GET 00:00:07.243 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.244 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.247 Response Code: HTTP/1.1 200 OK 00:00:07.247 Success: Status code 200 is in the accepted range: 200,404 00:00:07.247 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.225 [Pipeline] } 00:00:08.244 [Pipeline] // retry 00:00:08.251 [Pipeline] sh 00:00:08.537 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.554 [Pipeline] httpRequest 00:00:08.996 [Pipeline] echo 00:00:08.998 Sorcerer 10.211.164.20 is alive 00:00:09.048 [Pipeline] retry 00:00:09.051 [Pipeline] { 00:00:09.065 [Pipeline] httpRequest 00:00:09.069 HttpMethod: GET 00:00:09.070 URL: http://10.211.164.20/packages/spdk_73f18e8900ffebcb00e30c334c755860c93cc18b.tar.gz 00:00:09.070 Sending request to url: http://10.211.164.20/packages/spdk_73f18e8900ffebcb00e30c334c755860c93cc18b.tar.gz 00:00:09.074 Response Code: HTTP/1.1 200 OK 00:00:09.074 Success: Status code 200 is in the accepted range: 200,404 00:00:09.074 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_73f18e8900ffebcb00e30c334c755860c93cc18b.tar.gz 00:00:29.333 [Pipeline] } 00:00:29.351 [Pipeline] // retry 00:00:29.359 [Pipeline] sh 00:00:29.649 + tar --no-same-owner -xf spdk_73f18e8900ffebcb00e30c334c755860c93cc18b.tar.gz 00:00:32.952 [Pipeline] sh 00:00:33.239 + git -C spdk log 
--oneline -n5 00:00:33.239 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 00:00:33.239 029355612 bdev_ut: add manual examine bdev unit test case 00:00:33.239 fc96810c2 bdev: remove bdev from examine allow list on unregister 00:00:33.239 a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public 00:00:33.239 53ca6a885 bdev/nvme: Rearrange fields in spdk_bdev_nvme_opts to reduce holes. 00:00:33.252 [Pipeline] } 00:00:33.266 [Pipeline] // stage 00:00:33.275 [Pipeline] stage 00:00:33.277 [Pipeline] { (Prepare) 00:00:33.295 [Pipeline] writeFile 00:00:33.311 [Pipeline] sh 00:00:33.599 + logger -p user.info -t JENKINS-CI 00:00:33.613 [Pipeline] sh 00:00:33.900 + logger -p user.info -t JENKINS-CI 00:00:33.914 [Pipeline] sh 00:00:34.203 + cat autorun-spdk.conf 00:00:34.203 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.203 SPDK_TEST_NVMF=1 00:00:34.203 SPDK_TEST_NVME_CLI=1 00:00:34.203 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:34.203 SPDK_TEST_NVMF_NICS=e810 00:00:34.203 SPDK_TEST_VFIOUSER=1 00:00:34.203 SPDK_RUN_UBSAN=1 00:00:34.203 NET_TYPE=phy 00:00:34.212 RUN_NIGHTLY=0 00:00:34.216 [Pipeline] readFile 00:00:34.244 [Pipeline] withEnv 00:00:34.247 [Pipeline] { 00:00:34.259 [Pipeline] sh 00:00:34.548 + set -ex 00:00:34.548 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:34.548 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:34.548 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.548 ++ SPDK_TEST_NVMF=1 00:00:34.548 ++ SPDK_TEST_NVME_CLI=1 00:00:34.548 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:34.548 ++ SPDK_TEST_NVMF_NICS=e810 00:00:34.548 ++ SPDK_TEST_VFIOUSER=1 00:00:34.548 ++ SPDK_RUN_UBSAN=1 00:00:34.548 ++ NET_TYPE=phy 00:00:34.548 ++ RUN_NIGHTLY=0 00:00:34.548 + case $SPDK_TEST_NVMF_NICS in 00:00:34.548 + DRIVERS=ice 00:00:34.548 + [[ tcp == \r\d\m\a ]] 00:00:34.548 + [[ -n ice ]] 00:00:34.548 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:34.548 rmmod: ERROR: Module mlx4_ib is not currently loaded 
00:00:34.548 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:34.548 rmmod: ERROR: Module irdma is not currently loaded 00:00:34.548 rmmod: ERROR: Module i40iw is not currently loaded 00:00:34.548 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:34.548 + true 00:00:34.548 + for D in $DRIVERS 00:00:34.548 + sudo modprobe ice 00:00:34.548 + exit 0 00:00:34.559 [Pipeline] } 00:00:34.575 [Pipeline] // withEnv 00:00:34.579 [Pipeline] } 00:00:34.593 [Pipeline] // stage 00:00:34.602 [Pipeline] catchError 00:00:34.604 [Pipeline] { 00:00:34.619 [Pipeline] timeout 00:00:34.620 Timeout set to expire in 1 hr 0 min 00:00:34.622 [Pipeline] { 00:00:34.637 [Pipeline] stage 00:00:34.639 [Pipeline] { (Tests) 00:00:34.655 [Pipeline] sh 00:00:34.948 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:34.948 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:34.948 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:34.948 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:34.948 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:34.948 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:34.948 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:34.948 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:34.948 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:34.948 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:34.948 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:34.948 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:34.948 + source /etc/os-release 00:00:34.948 ++ NAME='Fedora Linux' 00:00:34.948 ++ VERSION='39 (Cloud Edition)' 00:00:34.948 ++ ID=fedora 00:00:34.948 ++ VERSION_ID=39 00:00:34.948 ++ VERSION_CODENAME= 00:00:34.948 ++ PLATFORM_ID=platform:f39 00:00:34.948 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:34.948 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:34.948 ++ LOGO=fedora-logo-icon 00:00:34.948 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:34.948 ++ HOME_URL=https://fedoraproject.org/ 00:00:34.948 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:34.948 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:34.948 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:34.948 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:34.948 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:34.948 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:34.948 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:34.948 ++ SUPPORT_END=2024-11-12 00:00:34.948 ++ VARIANT='Cloud Edition' 00:00:34.948 ++ VARIANT_ID=cloud 00:00:34.948 + uname -a 00:00:34.948 Linux spdk-gp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:34.948 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:36.328 Hugepages 00:00:36.328 node hugesize free / total 00:00:36.328 node0 1048576kB 0 / 0 00:00:36.328 node0 2048kB 0 / 0 00:00:36.328 node1 1048576kB 0 / 0 00:00:36.328 node1 2048kB 0 / 0 00:00:36.328 00:00:36.328 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:36.328 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:36.328 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:00:36.328 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:36.328 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:36.328 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:36.328 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:36.328 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:36.328 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:36.328 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:36.328 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:36.328 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:36.328 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:36.328 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:36.328 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:36.328 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:36.328 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:36.328 NVMe 0000:81:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:36.328 + rm -f /tmp/spdk-ld-path 00:00:36.328 + source autorun-spdk.conf 00:00:36.328 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.328 ++ SPDK_TEST_NVMF=1 00:00:36.328 ++ SPDK_TEST_NVME_CLI=1 00:00:36.328 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:36.329 ++ SPDK_TEST_NVMF_NICS=e810 00:00:36.329 ++ SPDK_TEST_VFIOUSER=1 00:00:36.329 ++ SPDK_RUN_UBSAN=1 00:00:36.329 ++ NET_TYPE=phy 00:00:36.329 ++ RUN_NIGHTLY=0 00:00:36.329 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:36.329 + [[ -n '' ]] 00:00:36.329 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:36.329 + for M in /var/spdk/build-*-manifest.txt 00:00:36.329 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:36.329 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:36.329 + for M in /var/spdk/build-*-manifest.txt 00:00:36.329 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:36.329 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:36.329 + for M in /var/spdk/build-*-manifest.txt 00:00:36.329 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:00:36.329 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:36.329 ++ uname 00:00:36.329 + [[ Linux == \L\i\n\u\x ]] 00:00:36.329 + sudo dmesg -T 00:00:36.329 + sudo dmesg --clear 00:00:36.329 + dmesg_pid=2394142 00:00:36.329 + [[ Fedora Linux == FreeBSD ]] 00:00:36.329 + sudo dmesg -Tw 00:00:36.329 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:36.329 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:36.329 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:36.329 + [[ -x /usr/src/fio-static/fio ]] 00:00:36.329 + export FIO_BIN=/usr/src/fio-static/fio 00:00:36.329 + FIO_BIN=/usr/src/fio-static/fio 00:00:36.329 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:36.329 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:36.329 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:36.329 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:36.329 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:36.329 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:36.329 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:36.329 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:36.329 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:36.329 11:02:31 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:36.329 11:02:31 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:36.329 11:02:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.329 11:02:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:00:36.329 11:02:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:00:36.329 11:02:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:00:36.329 11:02:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:00:36.329 11:02:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:00:36.329 11:02:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:00:36.329 11:02:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:00:36.329 11:02:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:00:36.329 11:02:31 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:00:36.329 11:02:31 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:36.590 11:02:31 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:00:36.590 11:02:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:36.590 11:02:31 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:36.590 11:02:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:36.590 11:02:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:36.590 11:02:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:36.590 11:02:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:36.590 11:02:31 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:36.590 11:02:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:36.590 11:02:31 -- paths/export.sh@5 -- $ export PATH 00:00:36.590 11:02:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:36.590 11:02:31 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:36.590 11:02:31 -- common/autobuild_common.sh@486 -- $ date +%s 00:00:36.590 11:02:31 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732010551.XXXXXX 00:00:36.590 11:02:31 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732010551.JCS5Ac 00:00:36.590 11:02:31 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:00:36.590 11:02:31 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:00:36.590 11:02:31 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:36.590 11:02:31 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:36.590 11:02:31 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:36.590 11:02:31 -- common/autobuild_common.sh@502 -- $ get_config_params 00:00:36.590 11:02:31 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:00:36.590 11:02:31 -- common/autotest_common.sh@10 -- $ set +x 00:00:36.590 11:02:31 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:36.590 11:02:31 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:00:36.590 11:02:31 -- pm/common@17 -- $ local monitor 00:00:36.590 11:02:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:36.590 11:02:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:36.590 11:02:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:36.590 11:02:31 -- pm/common@21 -- $ date +%s 00:00:36.590 11:02:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:36.590 11:02:31 -- pm/common@21 -- $ date +%s 00:00:36.590 11:02:31 -- pm/common@25 -- $ sleep 1 00:00:36.590 11:02:31 -- pm/common@21 -- $ date +%s 00:00:36.590 11:02:31 -- pm/common@21 -- $ date +%s 00:00:36.590 11:02:31 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732010551 00:00:36.590 11:02:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732010551 00:00:36.590 11:02:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732010551 00:00:36.590 11:02:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732010551 00:00:36.590 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732010551_collect-vmstat.pm.log 00:00:36.590 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732010551_collect-cpu-load.pm.log 00:00:36.590 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732010551_collect-cpu-temp.pm.log 00:00:36.590 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732010551_collect-bmc-pm.bmc.pm.log 00:00:37.537 11:02:32 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:00:37.537 11:02:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:37.537 11:02:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:37.537 11:02:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:37.537 11:02:32 -- spdk/autobuild.sh@16 -- $ date -u 00:00:37.537 Tue Nov 19 10:02:32 AM UTC 2024 00:00:37.537 11:02:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:00:37.537 v25.01-pre-196-g73f18e890 00:00:37.537 11:02:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:37.537 11:02:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:37.537 11:02:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:37.537 11:02:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:00:37.537 11:02:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:37.537 11:02:32 -- common/autotest_common.sh@10 -- $ set +x 00:00:37.537 ************************************ 00:00:37.537 START TEST ubsan 00:00:37.537 ************************************ 00:00:37.537 11:02:32 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:00:37.537 using ubsan 00:00:37.537 00:00:37.537 real 0m0.000s 00:00:37.537 user 0m0.000s 00:00:37.537 sys 0m0.000s 00:00:37.537 11:02:32 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:00:37.537 11:02:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:37.537 ************************************ 00:00:37.537 END TEST ubsan 00:00:37.537 ************************************ 00:00:37.537 11:02:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:37.537 11:02:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:37.537 11:02:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:37.537 11:02:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:37.537 11:02:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:37.537 11:02:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:37.537 11:02:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:37.537 11:02:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:37.537 11:02:32 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:37.537 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:37.537 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:38.104 Using 'verbs' RDMA provider 00:00:48.667 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:00:58.656 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:00:58.656 Creating mk/config.mk...done. 00:00:58.656 Creating mk/cc.flags.mk...done. 00:00:58.656 Type 'make' to build. 00:00:58.656 11:02:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:00:58.656 11:02:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:00:58.656 11:02:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:58.656 11:02:53 -- common/autotest_common.sh@10 -- $ set +x 00:00:58.656 ************************************ 00:00:58.656 START TEST make 00:00:58.656 ************************************ 00:00:58.656 11:02:53 make -- common/autotest_common.sh@1129 -- $ make -j48 00:00:58.656 make[1]: Nothing to be done for 'all'. 
00:01:00.579 The Meson build system 00:01:00.579 Version: 1.5.0 00:01:00.579 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:00.579 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:00.579 Build type: native build 00:01:00.579 Project name: libvfio-user 00:01:00.579 Project version: 0.0.1 00:01:00.579 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:00.579 C linker for the host machine: cc ld.bfd 2.40-14 00:01:00.579 Host machine cpu family: x86_64 00:01:00.579 Host machine cpu: x86_64 00:01:00.579 Run-time dependency threads found: YES 00:01:00.579 Library dl found: YES 00:01:00.579 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:00.579 Run-time dependency json-c found: YES 0.17 00:01:00.579 Run-time dependency cmocka found: YES 1.1.7 00:01:00.579 Program pytest-3 found: NO 00:01:00.579 Program flake8 found: NO 00:01:00.579 Program misspell-fixer found: NO 00:01:00.579 Program restructuredtext-lint found: NO 00:01:00.579 Program valgrind found: YES (/usr/bin/valgrind) 00:01:00.579 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:00.579 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:00.579 Compiler for C supports arguments -Wwrite-strings: YES 00:01:00.579 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:00.579 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:00.579 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:00.579 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:00.579 Build targets in project: 8 00:01:00.579 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:00.579 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:00.579 00:01:00.579 libvfio-user 0.0.1 00:01:00.579 00:01:00.579 User defined options 00:01:00.579 buildtype : debug 00:01:00.579 default_library: shared 00:01:00.579 libdir : /usr/local/lib 00:01:00.579 00:01:00.579 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:01.155 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:01.417 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:01.417 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:01.417 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:01.417 [4/37] Compiling C object samples/null.p/null.c.o 00:01:01.679 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:01.679 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:01.679 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:01.679 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:01.679 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:01.679 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:01.679 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:01.679 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:01.679 [13/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:01.679 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:01.679 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:01.679 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:01.679 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:01.679 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:01.679 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:01.679 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:01.679 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:01.679 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:01.679 [23/37] Compiling C object samples/server.p/server.c.o 00:01:01.679 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:01.679 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:01.679 [26/37] Compiling C object samples/client.p/client.c.o 00:01:01.679 [27/37] Linking target samples/client 00:01:01.679 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:01.943 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:01.943 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:01.943 [31/37] Linking target test/unit_tests 00:01:01.943 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:02.207 [33/37] Linking target samples/null 00:01:02.207 [34/37] Linking target samples/server 00:01:02.207 [35/37] Linking target samples/lspci 00:01:02.207 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:02.207 [37/37] Linking target samples/gpio-pci-idio-16 00:01:02.207 INFO: autodetecting backend as ninja 00:01:02.207 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:02.207 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:03.193 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:03.193 ninja: no work to do. 
00:01:08.462 The Meson build system 00:01:08.462 Version: 1.5.0 00:01:08.462 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:08.462 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:08.463 Build type: native build 00:01:08.463 Program cat found: YES (/usr/bin/cat) 00:01:08.463 Project name: DPDK 00:01:08.463 Project version: 24.03.0 00:01:08.463 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:08.463 C linker for the host machine: cc ld.bfd 2.40-14 00:01:08.463 Host machine cpu family: x86_64 00:01:08.463 Host machine cpu: x86_64 00:01:08.463 Message: ## Building in Developer Mode ## 00:01:08.463 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:08.463 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:08.463 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:08.463 Program python3 found: YES (/usr/bin/python3) 00:01:08.463 Program cat found: YES (/usr/bin/cat) 00:01:08.463 Compiler for C supports arguments -march=native: YES 00:01:08.463 Checking for size of "void *" : 8 00:01:08.463 Checking for size of "void *" : 8 (cached) 00:01:08.463 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:08.463 Library m found: YES 00:01:08.463 Library numa found: YES 00:01:08.463 Has header "numaif.h" : YES 00:01:08.463 Library fdt found: NO 00:01:08.463 Library execinfo found: NO 00:01:08.463 Has header "execinfo.h" : YES 00:01:08.463 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:08.463 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:08.463 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:08.463 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:08.463 Run-time dependency openssl found: YES 3.1.1 00:01:08.463 Run-time 
dependency libpcap found: YES 1.10.4 00:01:08.463 Has header "pcap.h" with dependency libpcap: YES 00:01:08.463 Compiler for C supports arguments -Wcast-qual: YES 00:01:08.463 Compiler for C supports arguments -Wdeprecated: YES 00:01:08.463 Compiler for C supports arguments -Wformat: YES 00:01:08.463 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:08.463 Compiler for C supports arguments -Wformat-security: NO 00:01:08.463 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:08.463 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:08.463 Compiler for C supports arguments -Wnested-externs: YES 00:01:08.463 Compiler for C supports arguments -Wold-style-definition: YES 00:01:08.463 Compiler for C supports arguments -Wpointer-arith: YES 00:01:08.463 Compiler for C supports arguments -Wsign-compare: YES 00:01:08.463 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:08.463 Compiler for C supports arguments -Wundef: YES 00:01:08.463 Compiler for C supports arguments -Wwrite-strings: YES 00:01:08.463 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:08.463 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:08.463 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:08.463 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:08.463 Program objdump found: YES (/usr/bin/objdump) 00:01:08.463 Compiler for C supports arguments -mavx512f: YES 00:01:08.463 Checking if "AVX512 checking" compiles: YES 00:01:08.463 Fetching value of define "__SSE4_2__" : 1 00:01:08.463 Fetching value of define "__AES__" : 1 00:01:08.463 Fetching value of define "__AVX__" : 1 00:01:08.463 Fetching value of define "__AVX2__" : (undefined) 00:01:08.463 Fetching value of define "__AVX512BW__" : (undefined) 00:01:08.463 Fetching value of define "__AVX512CD__" : (undefined) 00:01:08.463 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:08.463 Fetching 
value of define "__AVX512F__" : (undefined) 00:01:08.463 Fetching value of define "__AVX512VL__" : (undefined) 00:01:08.463 Fetching value of define "__PCLMUL__" : 1 00:01:08.463 Fetching value of define "__RDRND__" : 1 00:01:08.463 Fetching value of define "__RDSEED__" : (undefined) 00:01:08.463 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:08.463 Fetching value of define "__znver1__" : (undefined) 00:01:08.463 Fetching value of define "__znver2__" : (undefined) 00:01:08.463 Fetching value of define "__znver3__" : (undefined) 00:01:08.463 Fetching value of define "__znver4__" : (undefined) 00:01:08.463 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:08.463 Message: lib/log: Defining dependency "log" 00:01:08.463 Message: lib/kvargs: Defining dependency "kvargs" 00:01:08.463 Message: lib/telemetry: Defining dependency "telemetry" 00:01:08.463 Checking for function "getentropy" : NO 00:01:08.463 Message: lib/eal: Defining dependency "eal" 00:01:08.463 Message: lib/ring: Defining dependency "ring" 00:01:08.463 Message: lib/rcu: Defining dependency "rcu" 00:01:08.463 Message: lib/mempool: Defining dependency "mempool" 00:01:08.463 Message: lib/mbuf: Defining dependency "mbuf" 00:01:08.463 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:08.463 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:08.463 Compiler for C supports arguments -mpclmul: YES 00:01:08.463 Compiler for C supports arguments -maes: YES 00:01:08.463 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:08.463 Compiler for C supports arguments -mavx512bw: YES 00:01:08.463 Compiler for C supports arguments -mavx512dq: YES 00:01:08.463 Compiler for C supports arguments -mavx512vl: YES 00:01:08.463 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:08.463 Compiler for C supports arguments -mavx2: YES 00:01:08.463 Compiler for C supports arguments -mavx: YES 00:01:08.463 Message: lib/net: Defining dependency "net" 00:01:08.463 
Message: lib/meter: Defining dependency "meter" 00:01:08.463 Message: lib/ethdev: Defining dependency "ethdev" 00:01:08.463 Message: lib/pci: Defining dependency "pci" 00:01:08.463 Message: lib/cmdline: Defining dependency "cmdline" 00:01:08.463 Message: lib/hash: Defining dependency "hash" 00:01:08.463 Message: lib/timer: Defining dependency "timer" 00:01:08.463 Message: lib/compressdev: Defining dependency "compressdev" 00:01:08.463 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:08.463 Message: lib/dmadev: Defining dependency "dmadev" 00:01:08.463 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:08.463 Message: lib/power: Defining dependency "power" 00:01:08.463 Message: lib/reorder: Defining dependency "reorder" 00:01:08.463 Message: lib/security: Defining dependency "security" 00:01:08.463 Has header "linux/userfaultfd.h" : YES 00:01:08.463 Has header "linux/vduse.h" : YES 00:01:08.463 Message: lib/vhost: Defining dependency "vhost" 00:01:08.463 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:08.463 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:08.463 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:08.463 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:08.463 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:08.463 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:08.463 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:08.463 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:08.463 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:08.463 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:08.463 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:08.463 Configuring doxy-api-html.conf using configuration 00:01:08.463 Configuring doxy-api-man.conf using configuration 00:01:08.463 
Program mandb found: YES (/usr/bin/mandb) 00:01:08.463 Program sphinx-build found: NO 00:01:08.463 Configuring rte_build_config.h using configuration 00:01:08.463 Message: 00:01:08.463 ================= 00:01:08.463 Applications Enabled 00:01:08.463 ================= 00:01:08.463 00:01:08.463 apps: 00:01:08.463 00:01:08.463 00:01:08.463 Message: 00:01:08.463 ================= 00:01:08.463 Libraries Enabled 00:01:08.463 ================= 00:01:08.463 00:01:08.463 libs: 00:01:08.463 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:08.463 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:08.463 cryptodev, dmadev, power, reorder, security, vhost, 00:01:08.463 00:01:08.463 Message: 00:01:08.463 =============== 00:01:08.463 Drivers Enabled 00:01:08.463 =============== 00:01:08.463 00:01:08.463 common: 00:01:08.463 00:01:08.463 bus: 00:01:08.463 pci, vdev, 00:01:08.463 mempool: 00:01:08.463 ring, 00:01:08.463 dma: 00:01:08.463 00:01:08.463 net: 00:01:08.463 00:01:08.463 crypto: 00:01:08.463 00:01:08.463 compress: 00:01:08.463 00:01:08.463 vdpa: 00:01:08.463 00:01:08.463 00:01:08.463 Message: 00:01:08.463 ================= 00:01:08.463 Content Skipped 00:01:08.463 ================= 00:01:08.463 00:01:08.463 apps: 00:01:08.463 dumpcap: explicitly disabled via build config 00:01:08.463 graph: explicitly disabled via build config 00:01:08.463 pdump: explicitly disabled via build config 00:01:08.463 proc-info: explicitly disabled via build config 00:01:08.463 test-acl: explicitly disabled via build config 00:01:08.463 test-bbdev: explicitly disabled via build config 00:01:08.463 test-cmdline: explicitly disabled via build config 00:01:08.463 test-compress-perf: explicitly disabled via build config 00:01:08.463 test-crypto-perf: explicitly disabled via build config 00:01:08.463 test-dma-perf: explicitly disabled via build config 00:01:08.463 test-eventdev: explicitly disabled via build config 00:01:08.463 test-fib: explicitly disabled via build 
config 00:01:08.463 test-flow-perf: explicitly disabled via build config 00:01:08.463 test-gpudev: explicitly disabled via build config 00:01:08.463 test-mldev: explicitly disabled via build config 00:01:08.463 test-pipeline: explicitly disabled via build config 00:01:08.463 test-pmd: explicitly disabled via build config 00:01:08.463 test-regex: explicitly disabled via build config 00:01:08.463 test-sad: explicitly disabled via build config 00:01:08.464 test-security-perf: explicitly disabled via build config 00:01:08.464 00:01:08.464 libs: 00:01:08.464 argparse: explicitly disabled via build config 00:01:08.464 metrics: explicitly disabled via build config 00:01:08.464 acl: explicitly disabled via build config 00:01:08.464 bbdev: explicitly disabled via build config 00:01:08.464 bitratestats: explicitly disabled via build config 00:01:08.464 bpf: explicitly disabled via build config 00:01:08.464 cfgfile: explicitly disabled via build config 00:01:08.464 distributor: explicitly disabled via build config 00:01:08.464 efd: explicitly disabled via build config 00:01:08.464 eventdev: explicitly disabled via build config 00:01:08.464 dispatcher: explicitly disabled via build config 00:01:08.464 gpudev: explicitly disabled via build config 00:01:08.464 gro: explicitly disabled via build config 00:01:08.464 gso: explicitly disabled via build config 00:01:08.464 ip_frag: explicitly disabled via build config 00:01:08.464 jobstats: explicitly disabled via build config 00:01:08.464 latencystats: explicitly disabled via build config 00:01:08.464 lpm: explicitly disabled via build config 00:01:08.464 member: explicitly disabled via build config 00:01:08.464 pcapng: explicitly disabled via build config 00:01:08.464 rawdev: explicitly disabled via build config 00:01:08.464 regexdev: explicitly disabled via build config 00:01:08.464 mldev: explicitly disabled via build config 00:01:08.464 rib: explicitly disabled via build config 00:01:08.464 sched: explicitly disabled via build 
config 00:01:08.464 stack: explicitly disabled via build config 00:01:08.464 ipsec: explicitly disabled via build config 00:01:08.464 pdcp: explicitly disabled via build config 00:01:08.464 fib: explicitly disabled via build config 00:01:08.464 port: explicitly disabled via build config 00:01:08.464 pdump: explicitly disabled via build config 00:01:08.464 table: explicitly disabled via build config 00:01:08.464 pipeline: explicitly disabled via build config 00:01:08.464 graph: explicitly disabled via build config 00:01:08.464 node: explicitly disabled via build config 00:01:08.464 00:01:08.464 drivers: 00:01:08.464 common/cpt: not in enabled drivers build config 00:01:08.464 common/dpaax: not in enabled drivers build config 00:01:08.464 common/iavf: not in enabled drivers build config 00:01:08.464 common/idpf: not in enabled drivers build config 00:01:08.464 common/ionic: not in enabled drivers build config 00:01:08.464 common/mvep: not in enabled drivers build config 00:01:08.464 common/octeontx: not in enabled drivers build config 00:01:08.464 bus/auxiliary: not in enabled drivers build config 00:01:08.464 bus/cdx: not in enabled drivers build config 00:01:08.464 bus/dpaa: not in enabled drivers build config 00:01:08.464 bus/fslmc: not in enabled drivers build config 00:01:08.464 bus/ifpga: not in enabled drivers build config 00:01:08.464 bus/platform: not in enabled drivers build config 00:01:08.464 bus/uacce: not in enabled drivers build config 00:01:08.464 bus/vmbus: not in enabled drivers build config 00:01:08.464 common/cnxk: not in enabled drivers build config 00:01:08.464 common/mlx5: not in enabled drivers build config 00:01:08.464 common/nfp: not in enabled drivers build config 00:01:08.464 common/nitrox: not in enabled drivers build config 00:01:08.464 common/qat: not in enabled drivers build config 00:01:08.464 common/sfc_efx: not in enabled drivers build config 00:01:08.464 mempool/bucket: not in enabled drivers build config 00:01:08.464 mempool/cnxk: 
not in enabled drivers build config 00:01:08.464 mempool/dpaa: not in enabled drivers build config 00:01:08.464 mempool/dpaa2: not in enabled drivers build config 00:01:08.464 mempool/octeontx: not in enabled drivers build config 00:01:08.464 mempool/stack: not in enabled drivers build config 00:01:08.464 dma/cnxk: not in enabled drivers build config 00:01:08.464 dma/dpaa: not in enabled drivers build config 00:01:08.464 dma/dpaa2: not in enabled drivers build config 00:01:08.464 dma/hisilicon: not in enabled drivers build config 00:01:08.464 dma/idxd: not in enabled drivers build config 00:01:08.464 dma/ioat: not in enabled drivers build config 00:01:08.464 dma/skeleton: not in enabled drivers build config 00:01:08.464 net/af_packet: not in enabled drivers build config 00:01:08.464 net/af_xdp: not in enabled drivers build config 00:01:08.464 net/ark: not in enabled drivers build config 00:01:08.464 net/atlantic: not in enabled drivers build config 00:01:08.464 net/avp: not in enabled drivers build config 00:01:08.464 net/axgbe: not in enabled drivers build config 00:01:08.464 net/bnx2x: not in enabled drivers build config 00:01:08.464 net/bnxt: not in enabled drivers build config 00:01:08.464 net/bonding: not in enabled drivers build config 00:01:08.464 net/cnxk: not in enabled drivers build config 00:01:08.464 net/cpfl: not in enabled drivers build config 00:01:08.464 net/cxgbe: not in enabled drivers build config 00:01:08.464 net/dpaa: not in enabled drivers build config 00:01:08.464 net/dpaa2: not in enabled drivers build config 00:01:08.464 net/e1000: not in enabled drivers build config 00:01:08.464 net/ena: not in enabled drivers build config 00:01:08.464 net/enetc: not in enabled drivers build config 00:01:08.464 net/enetfec: not in enabled drivers build config 00:01:08.464 net/enic: not in enabled drivers build config 00:01:08.464 net/failsafe: not in enabled drivers build config 00:01:08.464 net/fm10k: not in enabled drivers build config 00:01:08.464 
net/gve: not in enabled drivers build config 00:01:08.464 net/hinic: not in enabled drivers build config 00:01:08.464 net/hns3: not in enabled drivers build config 00:01:08.464 net/i40e: not in enabled drivers build config 00:01:08.464 net/iavf: not in enabled drivers build config 00:01:08.464 net/ice: not in enabled drivers build config 00:01:08.464 net/idpf: not in enabled drivers build config 00:01:08.464 net/igc: not in enabled drivers build config 00:01:08.464 net/ionic: not in enabled drivers build config 00:01:08.464 net/ipn3ke: not in enabled drivers build config 00:01:08.464 net/ixgbe: not in enabled drivers build config 00:01:08.464 net/mana: not in enabled drivers build config 00:01:08.464 net/memif: not in enabled drivers build config 00:01:08.464 net/mlx4: not in enabled drivers build config 00:01:08.464 net/mlx5: not in enabled drivers build config 00:01:08.464 net/mvneta: not in enabled drivers build config 00:01:08.464 net/mvpp2: not in enabled drivers build config 00:01:08.464 net/netvsc: not in enabled drivers build config 00:01:08.464 net/nfb: not in enabled drivers build config 00:01:08.464 net/nfp: not in enabled drivers build config 00:01:08.464 net/ngbe: not in enabled drivers build config 00:01:08.464 net/null: not in enabled drivers build config 00:01:08.464 net/octeontx: not in enabled drivers build config 00:01:08.464 net/octeon_ep: not in enabled drivers build config 00:01:08.464 net/pcap: not in enabled drivers build config 00:01:08.464 net/pfe: not in enabled drivers build config 00:01:08.464 net/qede: not in enabled drivers build config 00:01:08.464 net/ring: not in enabled drivers build config 00:01:08.464 net/sfc: not in enabled drivers build config 00:01:08.464 net/softnic: not in enabled drivers build config 00:01:08.464 net/tap: not in enabled drivers build config 00:01:08.464 net/thunderx: not in enabled drivers build config 00:01:08.464 net/txgbe: not in enabled drivers build config 00:01:08.464 net/vdev_netvsc: not in enabled 
drivers build config 00:01:08.464 net/vhost: not in enabled drivers build config 00:01:08.464 net/virtio: not in enabled drivers build config 00:01:08.464 net/vmxnet3: not in enabled drivers build config 00:01:08.464 raw/*: missing internal dependency, "rawdev" 00:01:08.464 crypto/armv8: not in enabled drivers build config 00:01:08.464 crypto/bcmfs: not in enabled drivers build config 00:01:08.464 crypto/caam_jr: not in enabled drivers build config 00:01:08.464 crypto/ccp: not in enabled drivers build config 00:01:08.464 crypto/cnxk: not in enabled drivers build config 00:01:08.464 crypto/dpaa_sec: not in enabled drivers build config 00:01:08.464 crypto/dpaa2_sec: not in enabled drivers build config 00:01:08.464 crypto/ipsec_mb: not in enabled drivers build config 00:01:08.464 crypto/mlx5: not in enabled drivers build config 00:01:08.464 crypto/mvsam: not in enabled drivers build config 00:01:08.464 crypto/nitrox: not in enabled drivers build config 00:01:08.464 crypto/null: not in enabled drivers build config 00:01:08.464 crypto/octeontx: not in enabled drivers build config 00:01:08.464 crypto/openssl: not in enabled drivers build config 00:01:08.464 crypto/scheduler: not in enabled drivers build config 00:01:08.464 crypto/uadk: not in enabled drivers build config 00:01:08.464 crypto/virtio: not in enabled drivers build config 00:01:08.464 compress/isal: not in enabled drivers build config 00:01:08.464 compress/mlx5: not in enabled drivers build config 00:01:08.464 compress/nitrox: not in enabled drivers build config 00:01:08.464 compress/octeontx: not in enabled drivers build config 00:01:08.464 compress/zlib: not in enabled drivers build config 00:01:08.464 regex/*: missing internal dependency, "regexdev" 00:01:08.464 ml/*: missing internal dependency, "mldev" 00:01:08.464 vdpa/ifc: not in enabled drivers build config 00:01:08.464 vdpa/mlx5: not in enabled drivers build config 00:01:08.464 vdpa/nfp: not in enabled drivers build config 00:01:08.464 vdpa/sfc: not 
in enabled drivers build config 00:01:08.464 event/*: missing internal dependency, "eventdev" 00:01:08.464 baseband/*: missing internal dependency, "bbdev" 00:01:08.464 gpu/*: missing internal dependency, "gpudev" 00:01:08.464 00:01:08.464 00:01:08.464 Build targets in project: 85 00:01:08.464 00:01:08.464 DPDK 24.03.0 00:01:08.464 00:01:08.464 User defined options 00:01:08.464 buildtype : debug 00:01:08.464 default_library : shared 00:01:08.464 libdir : lib 00:01:08.464 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:08.464 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:08.464 c_link_args : 00:01:08.464 cpu_instruction_set: native 00:01:08.464 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:08.465 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:08.465 enable_docs : false 00:01:08.465 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:08.465 enable_kmods : false 00:01:08.465 max_lcores : 128 00:01:08.465 tests : false 00:01:08.465 00:01:08.465 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:08.465 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:08.465 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:08.465 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:08.724 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:08.724 [4/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:08.724 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:08.725 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:08.725 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:08.725 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:08.725 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:08.725 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:08.725 [11/268] Linking static target lib/librte_kvargs.a 00:01:08.725 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:08.725 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:08.725 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:08.725 [15/268] Linking static target lib/librte_log.a 00:01:08.725 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:09.297 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.557 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:09.557 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:09.557 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:09.557 [21/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:09.557 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:09.557 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:09.557 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:09.557 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:09.557 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 
00:01:09.557 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:09.557 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:09.557 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:09.557 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:09.557 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:09.557 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:09.557 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:09.557 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:09.557 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:09.557 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:09.557 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:09.557 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:09.557 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:09.557 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:09.557 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:09.557 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:09.557 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:09.557 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:09.557 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:09.557 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:09.557 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:09.557 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:09.557 [49/268] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:09.557 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:09.557 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:09.557 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:09.557 [53/268] Linking static target lib/librte_telemetry.a 00:01:09.557 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:09.557 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:09.557 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:09.557 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:09.820 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:09.820 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:09.820 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:09.820 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:09.820 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:09.820 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:10.082 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.082 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:10.082 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:10.082 [67/268] Linking target lib/librte_log.so.24.1 00:01:10.082 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:10.082 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:10.082 [70/268] Linking static target lib/librte_pci.a 00:01:10.344 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:10.344 [72/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:10.344 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:10.344 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:10.344 [75/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:10.344 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:10.344 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:10.344 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:10.344 [79/268] Linking target lib/librte_kvargs.so.24.1 00:01:10.344 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:10.344 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:10.344 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:10.344 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:10.610 [84/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:10.610 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:10.610 [86/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:10.610 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:10.610 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:10.610 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:10.610 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:10.610 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:10.610 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:10.610 [93/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:10.610 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:10.610 
[95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:10.610 [96/268] Linking static target lib/librte_meter.a 00:01:10.610 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:10.610 [98/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:10.610 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:10.610 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:10.610 [101/268] Linking static target lib/librte_ring.a 00:01:10.610 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:10.610 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:10.610 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:10.610 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:10.610 [106/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.610 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:10.610 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:10.610 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:10.610 [110/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:10.610 [111/268] Linking target lib/librte_telemetry.so.24.1 00:01:10.872 [112/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:10.872 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:10.872 [114/268] Linking static target lib/librte_eal.a 00:01:10.872 [115/268] Linking static target lib/librte_mempool.a 00:01:10.872 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:10.872 [117/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.872 [118/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:10.872 [119/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:10.872 [120/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:10.872 [121/268] Linking static target lib/librte_rcu.a 00:01:10.872 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:10.872 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:10.872 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:10.872 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:10.872 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:10.872 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:11.136 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:11.136 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:11.136 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:11.136 [131/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:11.136 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:11.136 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:11.136 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.136 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:11.136 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:11.136 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.136 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:11.136 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:11.136 [140/268] Linking static target lib/librte_net.a 00:01:11.394 [141/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:11.394 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:11.394 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:11.394 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:11.394 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:11.394 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:11.394 [147/268] Linking static target lib/librte_cmdline.a 00:01:11.394 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:11.653 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:11.653 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:11.653 [151/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.653 [152/268] Linking static target lib/librte_timer.a 00:01:11.653 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:11.653 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:11.653 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:11.653 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:11.653 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:11.653 [158/268] Linking static target lib/librte_dmadev.a 00:01:11.653 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:11.653 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:11.653 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:11.653 [162/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.913 [163/268] 
Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:11.913 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:11.913 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:11.913 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:11.913 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:11.913 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:11.913 [169/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.913 [170/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.913 [171/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:11.913 [172/268] Linking static target lib/librte_power.a 00:01:11.913 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:12.170 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:12.170 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:12.170 [176/268] Linking static target lib/librte_hash.a 00:01:12.170 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:12.170 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:12.170 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:12.170 [180/268] Linking static target lib/librte_compressdev.a 00:01:12.170 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:12.170 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:12.170 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:12.170 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:12.170 [185/268] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:12.170 [186/268] Linking static target lib/librte_reorder.a 00:01:12.170 [187/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:12.170 [188/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:12.170 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:12.170 [190/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.426 [191/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:12.427 [192/268] Linking static target lib/librte_mbuf.a 00:01:12.427 [193/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:12.427 [194/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:12.427 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:12.427 [196/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:12.427 [197/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.427 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:12.427 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.427 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.427 [201/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:12.427 [202/268] Linking static target drivers/librte_bus_vdev.a 00:01:12.427 [203/268] Linking static target lib/librte_security.a 00:01:12.427 [204/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.684 [205/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.684 [206/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:12.684 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:12.684 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:12.684 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:12.684 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:12.684 [211/268] Linking static target drivers/librte_mempool_ring.a 00:01:12.684 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:12.684 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:12.684 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:12.684 [215/268] Linking static target drivers/librte_bus_pci.a 00:01:12.684 [216/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.684 [217/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:12.684 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.684 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.943 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.943 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:12.943 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:12.943 [223/268] Linking static target lib/librte_ethdev.a 00:01:13.201 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.201 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:13.201 [226/268] Linking static target lib/librte_cryptodev.a 00:01:14.576 [227/268] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:15.202 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:17.127 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.127 [230/268] Linking target lib/librte_eal.so.24.1 00:01:17.127 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.127 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:17.127 [233/268] Linking target lib/librte_ring.so.24.1 00:01:17.127 [234/268] Linking target lib/librte_meter.so.24.1 00:01:17.127 [235/268] Linking target lib/librte_timer.so.24.1 00:01:17.127 [236/268] Linking target lib/librte_pci.so.24.1 00:01:17.127 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:17.127 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:17.385 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:17.385 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:17.385 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:17.385 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:17.385 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:17.385 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:17.385 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:17.385 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:17.385 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:17.385 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:17.643 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:17.643 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:17.643 [251/268] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:17.643 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:17.643 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:17.643 [254/268] Linking target lib/librte_net.so.24.1 00:01:17.643 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:17.901 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:17.901 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:17.901 [258/268] Linking target lib/librte_hash.so.24.1 00:01:17.901 [259/268] Linking target lib/librte_security.so.24.1 00:01:17.901 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:17.901 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:17.901 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:18.159 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:18.159 [264/268] Linking target lib/librte_power.so.24.1 00:01:21.443 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:21.443 [266/268] Linking static target lib/librte_vhost.a 00:01:22.009 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.009 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:22.009 INFO: autodetecting backend as ninja 00:01:22.009 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:43.932 CC lib/log/log.o 00:01:43.932 CC lib/ut_mock/mock.o 00:01:43.932 CC lib/log/log_flags.o 00:01:43.932 CC lib/log/log_deprecated.o 00:01:43.932 CC lib/ut/ut.o 00:01:43.932 LIB libspdk_ut_mock.a 00:01:43.932 LIB libspdk_log.a 00:01:43.932 LIB libspdk_ut.a 00:01:43.932 SO libspdk_ut_mock.so.6.0 00:01:43.932 SO libspdk_ut.so.2.0 00:01:43.932 SO libspdk_log.so.7.1 00:01:43.932 SYMLINK libspdk_ut_mock.so 00:01:43.932 
SYMLINK libspdk_ut.so 00:01:43.932 SYMLINK libspdk_log.so 00:01:43.932 CC lib/dma/dma.o 00:01:43.932 CC lib/ioat/ioat.o 00:01:43.932 CXX lib/trace_parser/trace.o 00:01:43.932 CC lib/util/base64.o 00:01:43.932 CC lib/util/bit_array.o 00:01:43.932 CC lib/util/cpuset.o 00:01:43.932 CC lib/util/crc16.o 00:01:43.932 CC lib/util/crc32.o 00:01:43.932 CC lib/util/crc32c.o 00:01:43.932 CC lib/util/crc32_ieee.o 00:01:43.932 CC lib/util/crc64.o 00:01:43.932 CC lib/util/dif.o 00:01:43.932 CC lib/util/fd.o 00:01:43.932 CC lib/util/fd_group.o 00:01:43.932 CC lib/util/file.o 00:01:43.932 CC lib/util/hexlify.o 00:01:43.932 CC lib/util/iov.o 00:01:43.932 CC lib/util/math.o 00:01:43.932 CC lib/util/net.o 00:01:43.932 CC lib/util/pipe.o 00:01:43.932 CC lib/util/strerror_tls.o 00:01:43.932 CC lib/util/string.o 00:01:43.932 CC lib/util/uuid.o 00:01:43.932 CC lib/util/xor.o 00:01:43.932 CC lib/util/zipf.o 00:01:43.932 CC lib/util/md5.o 00:01:43.932 CC lib/vfio_user/host/vfio_user_pci.o 00:01:43.932 CC lib/vfio_user/host/vfio_user.o 00:01:43.932 LIB libspdk_dma.a 00:01:43.932 SO libspdk_dma.so.5.0 00:01:43.932 LIB libspdk_ioat.a 00:01:43.932 SYMLINK libspdk_dma.so 00:01:43.932 SO libspdk_ioat.so.7.0 00:01:43.932 LIB libspdk_vfio_user.a 00:01:43.932 SYMLINK libspdk_ioat.so 00:01:43.932 SO libspdk_vfio_user.so.5.0 00:01:43.932 SYMLINK libspdk_vfio_user.so 00:01:43.932 LIB libspdk_util.a 00:01:43.932 SO libspdk_util.so.10.1 00:01:43.932 SYMLINK libspdk_util.so 00:01:43.932 CC lib/vmd/vmd.o 00:01:43.932 CC lib/rdma_utils/rdma_utils.o 00:01:43.932 CC lib/idxd/idxd.o 00:01:43.932 CC lib/conf/conf.o 00:01:43.932 CC lib/json/json_parse.o 00:01:43.932 CC lib/env_dpdk/env.o 00:01:43.932 CC lib/vmd/led.o 00:01:43.932 CC lib/idxd/idxd_user.o 00:01:43.932 CC lib/json/json_util.o 00:01:43.932 CC lib/env_dpdk/memory.o 00:01:43.932 CC lib/idxd/idxd_kernel.o 00:01:43.932 CC lib/json/json_write.o 00:01:43.932 CC lib/env_dpdk/pci.o 00:01:43.932 CC lib/env_dpdk/init.o 00:01:43.932 CC lib/env_dpdk/threads.o 
00:01:43.932 CC lib/env_dpdk/pci_ioat.o 00:01:43.932 CC lib/env_dpdk/pci_virtio.o 00:01:43.932 CC lib/env_dpdk/pci_vmd.o 00:01:43.932 CC lib/env_dpdk/pci_idxd.o 00:01:43.932 CC lib/env_dpdk/pci_event.o 00:01:43.932 CC lib/env_dpdk/sigbus_handler.o 00:01:43.932 CC lib/env_dpdk/pci_dpdk.o 00:01:43.932 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:43.932 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:43.932 LIB libspdk_trace_parser.a 00:01:43.932 SO libspdk_trace_parser.so.6.0 00:01:43.932 SYMLINK libspdk_trace_parser.so 00:01:43.932 LIB libspdk_conf.a 00:01:43.932 SO libspdk_conf.so.6.0 00:01:43.932 LIB libspdk_rdma_utils.a 00:01:43.932 LIB libspdk_json.a 00:01:43.932 SYMLINK libspdk_conf.so 00:01:43.932 SO libspdk_rdma_utils.so.1.0 00:01:43.932 SO libspdk_json.so.6.0 00:01:43.932 SYMLINK libspdk_rdma_utils.so 00:01:43.932 SYMLINK libspdk_json.so 00:01:43.932 CC lib/rdma_provider/common.o 00:01:43.932 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:43.932 CC lib/jsonrpc/jsonrpc_server.o 00:01:43.932 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:43.932 CC lib/jsonrpc/jsonrpc_client.o 00:01:43.932 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:43.932 LIB libspdk_idxd.a 00:01:43.932 SO libspdk_idxd.so.12.1 00:01:43.932 LIB libspdk_vmd.a 00:01:43.932 SYMLINK libspdk_idxd.so 00:01:43.932 SO libspdk_vmd.so.6.0 00:01:43.932 SYMLINK libspdk_vmd.so 00:01:43.932 LIB libspdk_rdma_provider.a 00:01:43.932 SO libspdk_rdma_provider.so.7.0 00:01:43.932 LIB libspdk_jsonrpc.a 00:01:43.932 SO libspdk_jsonrpc.so.6.0 00:01:43.932 SYMLINK libspdk_rdma_provider.so 00:01:43.932 SYMLINK libspdk_jsonrpc.so 00:01:43.932 CC lib/rpc/rpc.o 00:01:43.932 LIB libspdk_rpc.a 00:01:43.932 SO libspdk_rpc.so.6.0 00:01:43.932 SYMLINK libspdk_rpc.so 00:01:43.932 CC lib/trace/trace.o 00:01:43.932 CC lib/keyring/keyring.o 00:01:43.932 CC lib/keyring/keyring_rpc.o 00:01:43.932 CC lib/notify/notify.o 00:01:43.932 CC lib/trace/trace_flags.o 00:01:43.932 CC lib/notify/notify_rpc.o 00:01:43.932 CC lib/trace/trace_rpc.o 00:01:44.190 LIB 
libspdk_notify.a 00:01:44.190 SO libspdk_notify.so.6.0 00:01:44.190 SYMLINK libspdk_notify.so 00:01:44.190 LIB libspdk_keyring.a 00:01:44.190 LIB libspdk_trace.a 00:01:44.190 SO libspdk_keyring.so.2.0 00:01:44.190 SO libspdk_trace.so.11.0 00:01:44.190 SYMLINK libspdk_keyring.so 00:01:44.190 SYMLINK libspdk_trace.so 00:01:44.448 CC lib/thread/thread.o 00:01:44.448 CC lib/sock/sock.o 00:01:44.448 CC lib/thread/iobuf.o 00:01:44.448 CC lib/sock/sock_rpc.o 00:01:44.448 LIB libspdk_env_dpdk.a 00:01:44.448 SO libspdk_env_dpdk.so.15.1 00:01:44.706 SYMLINK libspdk_env_dpdk.so 00:01:44.964 LIB libspdk_sock.a 00:01:44.964 SO libspdk_sock.so.10.0 00:01:44.964 SYMLINK libspdk_sock.so 00:01:44.964 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:44.964 CC lib/nvme/nvme_ctrlr.o 00:01:44.964 CC lib/nvme/nvme_fabric.o 00:01:44.964 CC lib/nvme/nvme_ns_cmd.o 00:01:44.964 CC lib/nvme/nvme_ns.o 00:01:44.964 CC lib/nvme/nvme_pcie_common.o 00:01:44.964 CC lib/nvme/nvme_pcie.o 00:01:44.964 CC lib/nvme/nvme_qpair.o 00:01:44.964 CC lib/nvme/nvme.o 00:01:44.964 CC lib/nvme/nvme_quirks.o 00:01:44.964 CC lib/nvme/nvme_transport.o 00:01:44.964 CC lib/nvme/nvme_discovery.o 00:01:44.964 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:44.964 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:44.964 CC lib/nvme/nvme_tcp.o 00:01:44.964 CC lib/nvme/nvme_opal.o 00:01:44.964 CC lib/nvme/nvme_io_msg.o 00:01:44.964 CC lib/nvme/nvme_poll_group.o 00:01:44.964 CC lib/nvme/nvme_zns.o 00:01:44.964 CC lib/nvme/nvme_stubs.o 00:01:44.964 CC lib/nvme/nvme_auth.o 00:01:44.964 CC lib/nvme/nvme_cuse.o 00:01:44.964 CC lib/nvme/nvme_vfio_user.o 00:01:45.223 CC lib/nvme/nvme_rdma.o 00:01:46.160 LIB libspdk_thread.a 00:01:46.160 SO libspdk_thread.so.11.0 00:01:46.160 SYMLINK libspdk_thread.so 00:01:46.419 CC lib/accel/accel.o 00:01:46.419 CC lib/accel/accel_rpc.o 00:01:46.419 CC lib/virtio/virtio.o 00:01:46.419 CC lib/fsdev/fsdev.o 00:01:46.419 CC lib/accel/accel_sw.o 00:01:46.419 CC lib/virtio/virtio_vhost_user.o 00:01:46.419 CC 
lib/vfu_tgt/tgt_endpoint.o 00:01:46.419 CC lib/init/json_config.o 00:01:46.419 CC lib/blob/blobstore.o 00:01:46.419 CC lib/fsdev/fsdev_io.o 00:01:46.419 CC lib/virtio/virtio_vfio_user.o 00:01:46.419 CC lib/blob/request.o 00:01:46.419 CC lib/init/subsystem.o 00:01:46.419 CC lib/fsdev/fsdev_rpc.o 00:01:46.419 CC lib/vfu_tgt/tgt_rpc.o 00:01:46.419 CC lib/virtio/virtio_pci.o 00:01:46.419 CC lib/init/subsystem_rpc.o 00:01:46.419 CC lib/blob/zeroes.o 00:01:46.419 CC lib/init/rpc.o 00:01:46.419 CC lib/blob/blob_bs_dev.o 00:01:46.677 LIB libspdk_init.a 00:01:46.677 SO libspdk_init.so.6.0 00:01:46.677 SYMLINK libspdk_init.so 00:01:46.677 LIB libspdk_virtio.a 00:01:46.677 LIB libspdk_vfu_tgt.a 00:01:46.677 SO libspdk_vfu_tgt.so.3.0 00:01:46.677 SO libspdk_virtio.so.7.0 00:01:46.677 SYMLINK libspdk_vfu_tgt.so 00:01:46.677 SYMLINK libspdk_virtio.so 00:01:46.936 CC lib/event/app.o 00:01:46.936 CC lib/event/reactor.o 00:01:46.936 CC lib/event/log_rpc.o 00:01:46.936 CC lib/event/app_rpc.o 00:01:46.936 CC lib/event/scheduler_static.o 00:01:46.936 LIB libspdk_fsdev.a 00:01:47.194 SO libspdk_fsdev.so.2.0 00:01:47.194 SYMLINK libspdk_fsdev.so 00:01:47.194 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:47.194 LIB libspdk_event.a 00:01:47.452 SO libspdk_event.so.14.0 00:01:47.452 SYMLINK libspdk_event.so 00:01:47.452 LIB libspdk_accel.a 00:01:47.452 SO libspdk_accel.so.16.0 00:01:47.452 LIB libspdk_nvme.a 00:01:47.710 SYMLINK libspdk_accel.so 00:01:47.710 SO libspdk_nvme.so.15.0 00:01:47.710 CC lib/bdev/bdev.o 00:01:47.710 CC lib/bdev/bdev_rpc.o 00:01:47.710 CC lib/bdev/bdev_zone.o 00:01:47.710 CC lib/bdev/part.o 00:01:47.710 CC lib/bdev/scsi_nvme.o 00:01:47.968 SYMLINK libspdk_nvme.so 00:01:47.968 LIB libspdk_fuse_dispatcher.a 00:01:47.968 SO libspdk_fuse_dispatcher.so.1.0 00:01:47.968 SYMLINK libspdk_fuse_dispatcher.so 00:01:49.344 LIB libspdk_blob.a 00:01:49.602 SO libspdk_blob.so.11.0 00:01:49.602 SYMLINK libspdk_blob.so 00:01:49.602 CC lib/lvol/lvol.o 00:01:49.602 CC 
lib/blobfs/blobfs.o 00:01:49.602 CC lib/blobfs/tree.o 00:01:50.543 LIB libspdk_bdev.a 00:01:50.543 SO libspdk_bdev.so.17.0 00:01:50.543 SYMLINK libspdk_bdev.so 00:01:50.543 LIB libspdk_blobfs.a 00:01:50.543 SO libspdk_blobfs.so.10.0 00:01:50.543 CC lib/scsi/dev.o 00:01:50.543 CC lib/ublk/ublk.o 00:01:50.543 CC lib/nbd/nbd.o 00:01:50.543 CC lib/nvmf/ctrlr.o 00:01:50.543 CC lib/scsi/lun.o 00:01:50.543 CC lib/ublk/ublk_rpc.o 00:01:50.543 CC lib/nbd/nbd_rpc.o 00:01:50.543 CC lib/nvmf/ctrlr_discovery.o 00:01:50.543 CC lib/scsi/port.o 00:01:50.543 CC lib/ftl/ftl_core.o 00:01:50.543 CC lib/nvmf/ctrlr_bdev.o 00:01:50.543 CC lib/scsi/scsi.o 00:01:50.543 CC lib/ftl/ftl_init.o 00:01:50.543 CC lib/nvmf/subsystem.o 00:01:50.543 CC lib/scsi/scsi_bdev.o 00:01:50.543 CC lib/ftl/ftl_layout.o 00:01:50.543 CC lib/nvmf/nvmf.o 00:01:50.543 CC lib/scsi/scsi_pr.o 00:01:50.543 CC lib/ftl/ftl_debug.o 00:01:50.543 CC lib/nvmf/nvmf_rpc.o 00:01:50.543 CC lib/scsi/scsi_rpc.o 00:01:50.543 CC lib/scsi/task.o 00:01:50.543 CC lib/nvmf/transport.o 00:01:50.543 CC lib/nvmf/tcp.o 00:01:50.543 CC lib/ftl/ftl_io.o 00:01:50.543 CC lib/nvmf/stubs.o 00:01:50.543 CC lib/ftl/ftl_l2p.o 00:01:50.543 CC lib/nvmf/mdns_server.o 00:01:50.543 CC lib/ftl/ftl_sb.o 00:01:50.543 CC lib/nvmf/vfio_user.o 00:01:50.543 CC lib/nvmf/rdma.o 00:01:50.543 CC lib/ftl/ftl_l2p_flat.o 00:01:50.543 CC lib/ftl/ftl_nv_cache.o 00:01:50.543 CC lib/nvmf/auth.o 00:01:50.543 CC lib/ftl/ftl_band.o 00:01:50.543 CC lib/ftl/ftl_band_ops.o 00:01:50.543 CC lib/ftl/ftl_writer.o 00:01:50.543 CC lib/ftl/ftl_rq.o 00:01:50.543 CC lib/ftl/ftl_reloc.o 00:01:50.543 CC lib/ftl/ftl_l2p_cache.o 00:01:50.543 CC lib/ftl/ftl_p2l.o 00:01:50.543 CC lib/ftl/ftl_p2l_log.o 00:01:50.543 CC lib/ftl/mngt/ftl_mngt.o 00:01:50.543 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:50.543 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:50.543 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:50.543 SYMLINK libspdk_blobfs.so 00:01:50.543 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:50.805 LIB libspdk_lvol.a 
00:01:50.805 SO libspdk_lvol.so.10.0 00:01:51.068 SYMLINK libspdk_lvol.so 00:01:51.068 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:51.068 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:51.068 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:51.068 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:51.068 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:51.068 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:51.068 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:51.068 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:51.068 CC lib/ftl/utils/ftl_conf.o 00:01:51.068 CC lib/ftl/utils/ftl_md.o 00:01:51.068 CC lib/ftl/utils/ftl_mempool.o 00:01:51.068 CC lib/ftl/utils/ftl_bitmap.o 00:01:51.068 CC lib/ftl/utils/ftl_property.o 00:01:51.068 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:51.068 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:51.068 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:51.068 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:51.068 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:51.068 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:51.330 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:51.330 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:51.330 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:51.330 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:51.330 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:51.330 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:01:51.330 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:01:51.330 CC lib/ftl/base/ftl_base_dev.o 00:01:51.330 CC lib/ftl/base/ftl_base_bdev.o 00:01:51.330 CC lib/ftl/ftl_trace.o 00:01:51.330 LIB libspdk_nbd.a 00:01:51.589 SO libspdk_nbd.so.7.0 00:01:51.589 SYMLINK libspdk_nbd.so 00:01:51.589 LIB libspdk_scsi.a 00:01:51.589 SO libspdk_scsi.so.9.0 00:01:51.589 SYMLINK libspdk_scsi.so 00:01:51.589 LIB libspdk_ublk.a 00:01:51.848 SO libspdk_ublk.so.3.0 00:01:51.848 SYMLINK libspdk_ublk.so 00:01:51.848 CC lib/vhost/vhost.o 00:01:51.848 CC lib/iscsi/conn.o 00:01:51.848 CC lib/iscsi/init_grp.o 00:01:51.848 CC lib/vhost/vhost_rpc.o 00:01:51.848 CC lib/vhost/vhost_scsi.o 00:01:51.848 CC lib/iscsi/iscsi.o 00:01:51.848 CC lib/iscsi/param.o 00:01:51.848 CC 
lib/vhost/vhost_blk.o 00:01:51.848 CC lib/iscsi/portal_grp.o 00:01:51.848 CC lib/vhost/rte_vhost_user.o 00:01:51.848 CC lib/iscsi/tgt_node.o 00:01:51.848 CC lib/iscsi/iscsi_subsystem.o 00:01:51.848 CC lib/iscsi/task.o 00:01:51.848 CC lib/iscsi/iscsi_rpc.o 00:01:52.106 LIB libspdk_ftl.a 00:01:52.365 SO libspdk_ftl.so.9.0 00:01:52.624 SYMLINK libspdk_ftl.so 00:01:53.190 LIB libspdk_vhost.a 00:01:53.190 SO libspdk_vhost.so.8.0 00:01:53.190 SYMLINK libspdk_vhost.so 00:01:53.190 LIB libspdk_nvmf.a 00:01:53.190 LIB libspdk_iscsi.a 00:01:53.449 SO libspdk_nvmf.so.20.0 00:01:53.449 SO libspdk_iscsi.so.8.0 00:01:53.449 SYMLINK libspdk_iscsi.so 00:01:53.449 SYMLINK libspdk_nvmf.so 00:01:53.744 CC module/env_dpdk/env_dpdk_rpc.o 00:01:53.744 CC module/vfu_device/vfu_virtio.o 00:01:53.744 CC module/vfu_device/vfu_virtio_blk.o 00:01:53.744 CC module/vfu_device/vfu_virtio_scsi.o 00:01:53.744 CC module/vfu_device/vfu_virtio_rpc.o 00:01:53.744 CC module/vfu_device/vfu_virtio_fs.o 00:01:54.002 CC module/keyring/linux/keyring.o 00:01:54.002 CC module/keyring/linux/keyring_rpc.o 00:01:54.002 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:54.002 CC module/scheduler/gscheduler/gscheduler.o 00:01:54.002 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:54.002 CC module/fsdev/aio/fsdev_aio.o 00:01:54.002 CC module/accel/ioat/accel_ioat.o 00:01:54.002 CC module/fsdev/aio/fsdev_aio_rpc.o 00:01:54.002 CC module/accel/dsa/accel_dsa.o 00:01:54.002 CC module/accel/iaa/accel_iaa.o 00:01:54.002 CC module/fsdev/aio/linux_aio_mgr.o 00:01:54.002 CC module/accel/ioat/accel_ioat_rpc.o 00:01:54.002 CC module/accel/dsa/accel_dsa_rpc.o 00:01:54.002 CC module/accel/iaa/accel_iaa_rpc.o 00:01:54.002 CC module/sock/posix/posix.o 00:01:54.002 CC module/keyring/file/keyring.o 00:01:54.002 CC module/accel/error/accel_error.o 00:01:54.002 CC module/accel/error/accel_error_rpc.o 00:01:54.002 CC module/blob/bdev/blob_bdev.o 00:01:54.002 CC module/keyring/file/keyring_rpc.o 00:01:54.002 LIB 
libspdk_env_dpdk_rpc.a 00:01:54.002 SO libspdk_env_dpdk_rpc.so.6.0 00:01:54.002 LIB libspdk_keyring_linux.a 00:01:54.002 SYMLINK libspdk_env_dpdk_rpc.so 00:01:54.002 LIB libspdk_scheduler_gscheduler.a 00:01:54.002 LIB libspdk_keyring_file.a 00:01:54.002 LIB libspdk_scheduler_dpdk_governor.a 00:01:54.002 SO libspdk_keyring_linux.so.1.0 00:01:54.002 SO libspdk_scheduler_gscheduler.so.4.0 00:01:54.002 SO libspdk_keyring_file.so.2.0 00:01:54.002 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:54.002 LIB libspdk_accel_ioat.a 00:01:54.261 LIB libspdk_scheduler_dynamic.a 00:01:54.261 SYMLINK libspdk_keyring_linux.so 00:01:54.261 SO libspdk_accel_ioat.so.6.0 00:01:54.261 LIB libspdk_accel_error.a 00:01:54.261 SO libspdk_scheduler_dynamic.so.4.0 00:01:54.261 SYMLINK libspdk_scheduler_gscheduler.so 00:01:54.261 SYMLINK libspdk_keyring_file.so 00:01:54.261 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:54.261 SO libspdk_accel_error.so.2.0 00:01:54.261 SYMLINK libspdk_accel_ioat.so 00:01:54.261 SYMLINK libspdk_scheduler_dynamic.so 00:01:54.261 LIB libspdk_blob_bdev.a 00:01:54.261 LIB libspdk_accel_dsa.a 00:01:54.261 LIB libspdk_accel_iaa.a 00:01:54.261 SYMLINK libspdk_accel_error.so 00:01:54.261 SO libspdk_blob_bdev.so.11.0 00:01:54.261 SO libspdk_accel_dsa.so.5.0 00:01:54.261 SO libspdk_accel_iaa.so.3.0 00:01:54.261 SYMLINK libspdk_blob_bdev.so 00:01:54.261 SYMLINK libspdk_accel_dsa.so 00:01:54.261 SYMLINK libspdk_accel_iaa.so 00:01:54.527 CC module/bdev/error/vbdev_error.o 00:01:54.527 CC module/bdev/error/vbdev_error_rpc.o 00:01:54.527 CC module/bdev/lvol/vbdev_lvol.o 00:01:54.527 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:54.527 CC module/bdev/null/bdev_null_rpc.o 00:01:54.527 CC module/bdev/gpt/gpt.o 00:01:54.527 CC module/bdev/null/bdev_null.o 00:01:54.527 CC module/bdev/split/vbdev_split.o 00:01:54.527 CC module/bdev/gpt/vbdev_gpt.o 00:01:54.527 CC module/bdev/split/vbdev_split_rpc.o 00:01:54.527 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:54.527 CC 
module/bdev/delay/vbdev_delay.o 00:01:54.527 CC module/blobfs/bdev/blobfs_bdev.o 00:01:54.527 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:54.527 CC module/bdev/nvme/bdev_nvme.o 00:01:54.527 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:54.527 CC module/bdev/malloc/bdev_malloc.o 00:01:54.527 CC module/bdev/passthru/vbdev_passthru.o 00:01:54.527 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:54.527 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:54.527 CC module/bdev/aio/bdev_aio.o 00:01:54.527 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:54.527 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:54.527 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:54.527 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:54.527 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:54.527 CC module/bdev/nvme/nvme_rpc.o 00:01:54.527 CC module/bdev/raid/bdev_raid.o 00:01:54.527 CC module/bdev/aio/bdev_aio_rpc.o 00:01:54.527 CC module/bdev/iscsi/bdev_iscsi.o 00:01:54.527 CC module/bdev/nvme/bdev_mdns_client.o 00:01:54.527 CC module/bdev/raid/bdev_raid_rpc.o 00:01:54.527 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:54.527 CC module/bdev/nvme/vbdev_opal.o 00:01:54.527 CC module/bdev/raid/bdev_raid_sb.o 00:01:54.527 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:54.527 CC module/bdev/raid/raid0.o 00:01:54.527 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:54.527 CC module/bdev/raid/raid1.o 00:01:54.527 CC module/bdev/ftl/bdev_ftl.o 00:01:54.527 CC module/bdev/raid/concat.o 00:01:54.527 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:54.527 LIB libspdk_vfu_device.a 00:01:54.785 SO libspdk_vfu_device.so.3.0 00:01:54.785 LIB libspdk_fsdev_aio.a 00:01:54.785 SO libspdk_fsdev_aio.so.1.0 00:01:54.785 LIB libspdk_sock_posix.a 00:01:54.785 SYMLINK libspdk_vfu_device.so 00:01:54.785 SO libspdk_sock_posix.so.6.0 00:01:54.786 SYMLINK libspdk_fsdev_aio.so 00:01:55.044 LIB libspdk_blobfs_bdev.a 00:01:55.044 SYMLINK libspdk_sock_posix.so 00:01:55.044 SO libspdk_blobfs_bdev.so.6.0 00:01:55.044 LIB libspdk_bdev_split.a 
00:01:55.044 SYMLINK libspdk_blobfs_bdev.so 00:01:55.044 SO libspdk_bdev_split.so.6.0 00:01:55.044 LIB libspdk_bdev_passthru.a 00:01:55.044 LIB libspdk_bdev_ftl.a 00:01:55.044 LIB libspdk_bdev_error.a 00:01:55.044 LIB libspdk_bdev_gpt.a 00:01:55.044 LIB libspdk_bdev_null.a 00:01:55.044 SYMLINK libspdk_bdev_split.so 00:01:55.044 SO libspdk_bdev_passthru.so.6.0 00:01:55.044 SO libspdk_bdev_ftl.so.6.0 00:01:55.044 SO libspdk_bdev_error.so.6.0 00:01:55.044 SO libspdk_bdev_gpt.so.6.0 00:01:55.044 SO libspdk_bdev_null.so.6.0 00:01:55.044 LIB libspdk_bdev_zone_block.a 00:01:55.044 LIB libspdk_bdev_delay.a 00:01:55.044 SYMLINK libspdk_bdev_passthru.so 00:01:55.044 SYMLINK libspdk_bdev_ftl.so 00:01:55.044 SYMLINK libspdk_bdev_error.so 00:01:55.044 SYMLINK libspdk_bdev_gpt.so 00:01:55.044 LIB libspdk_bdev_aio.a 00:01:55.044 SYMLINK libspdk_bdev_null.so 00:01:55.044 SO libspdk_bdev_zone_block.so.6.0 00:01:55.304 SO libspdk_bdev_delay.so.6.0 00:01:55.304 LIB libspdk_bdev_iscsi.a 00:01:55.304 SO libspdk_bdev_aio.so.6.0 00:01:55.304 LIB libspdk_bdev_malloc.a 00:01:55.304 SO libspdk_bdev_iscsi.so.6.0 00:01:55.304 SO libspdk_bdev_malloc.so.6.0 00:01:55.304 SYMLINK libspdk_bdev_zone_block.so 00:01:55.304 SYMLINK libspdk_bdev_delay.so 00:01:55.304 SYMLINK libspdk_bdev_aio.so 00:01:55.304 SYMLINK libspdk_bdev_iscsi.so 00:01:55.304 SYMLINK libspdk_bdev_malloc.so 00:01:55.304 LIB libspdk_bdev_virtio.a 00:01:55.304 SO libspdk_bdev_virtio.so.6.0 00:01:55.304 LIB libspdk_bdev_lvol.a 00:01:55.304 SO libspdk_bdev_lvol.so.6.0 00:01:55.304 SYMLINK libspdk_bdev_virtio.so 00:01:55.304 SYMLINK libspdk_bdev_lvol.so 00:01:55.872 LIB libspdk_bdev_raid.a 00:01:55.872 SO libspdk_bdev_raid.so.6.0 00:01:55.872 SYMLINK libspdk_bdev_raid.so 00:01:57.252 LIB libspdk_bdev_nvme.a 00:01:57.252 SO libspdk_bdev_nvme.so.7.1 00:01:57.510 SYMLINK libspdk_bdev_nvme.so 00:01:57.768 CC module/event/subsystems/iobuf/iobuf.o 00:01:57.768 CC module/event/subsystems/vmd/vmd.o 00:01:57.768 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:01:57.768 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:57.768 CC module/event/subsystems/keyring/keyring.o 00:01:57.768 CC module/event/subsystems/sock/sock.o 00:01:57.768 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:57.768 CC module/event/subsystems/scheduler/scheduler.o 00:01:57.768 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:57.768 CC module/event/subsystems/fsdev/fsdev.o 00:01:57.768 LIB libspdk_event_keyring.a 00:01:57.768 LIB libspdk_event_vhost_blk.a 00:01:57.768 LIB libspdk_event_fsdev.a 00:01:57.768 LIB libspdk_event_vfu_tgt.a 00:01:57.768 LIB libspdk_event_vmd.a 00:01:57.769 LIB libspdk_event_scheduler.a 00:01:58.029 LIB libspdk_event_sock.a 00:01:58.029 SO libspdk_event_keyring.so.1.0 00:01:58.029 SO libspdk_event_vhost_blk.so.3.0 00:01:58.029 LIB libspdk_event_iobuf.a 00:01:58.029 SO libspdk_event_fsdev.so.1.0 00:01:58.029 SO libspdk_event_vfu_tgt.so.3.0 00:01:58.029 SO libspdk_event_scheduler.so.4.0 00:01:58.029 SO libspdk_event_sock.so.5.0 00:01:58.029 SO libspdk_event_vmd.so.6.0 00:01:58.029 SO libspdk_event_iobuf.so.3.0 00:01:58.029 SYMLINK libspdk_event_vhost_blk.so 00:01:58.029 SYMLINK libspdk_event_keyring.so 00:01:58.029 SYMLINK libspdk_event_fsdev.so 00:01:58.029 SYMLINK libspdk_event_vfu_tgt.so 00:01:58.029 SYMLINK libspdk_event_sock.so 00:01:58.029 SYMLINK libspdk_event_scheduler.so 00:01:58.029 SYMLINK libspdk_event_vmd.so 00:01:58.029 SYMLINK libspdk_event_iobuf.so 00:01:58.029 CC module/event/subsystems/accel/accel.o 00:01:58.289 LIB libspdk_event_accel.a 00:01:58.289 SO libspdk_event_accel.so.6.0 00:01:58.289 SYMLINK libspdk_event_accel.so 00:01:58.547 CC module/event/subsystems/bdev/bdev.o 00:01:58.807 LIB libspdk_event_bdev.a 00:01:58.807 SO libspdk_event_bdev.so.6.0 00:01:58.807 SYMLINK libspdk_event_bdev.so 00:01:59.066 CC module/event/subsystems/nbd/nbd.o 00:01:59.066 CC module/event/subsystems/scsi/scsi.o 00:01:59.066 CC module/event/subsystems/nvmf/nvmf_rpc.o 
00:01:59.066 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:59.066 CC module/event/subsystems/ublk/ublk.o 00:01:59.066 LIB libspdk_event_nbd.a 00:01:59.066 LIB libspdk_event_ublk.a 00:01:59.066 LIB libspdk_event_scsi.a 00:01:59.066 SO libspdk_event_nbd.so.6.0 00:01:59.066 SO libspdk_event_ublk.so.3.0 00:01:59.066 SO libspdk_event_scsi.so.6.0 00:01:59.066 SYMLINK libspdk_event_nbd.so 00:01:59.066 SYMLINK libspdk_event_ublk.so 00:01:59.325 SYMLINK libspdk_event_scsi.so 00:01:59.325 LIB libspdk_event_nvmf.a 00:01:59.325 SO libspdk_event_nvmf.so.6.0 00:01:59.325 SYMLINK libspdk_event_nvmf.so 00:01:59.325 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:59.325 CC module/event/subsystems/iscsi/iscsi.o 00:01:59.584 LIB libspdk_event_vhost_scsi.a 00:01:59.584 SO libspdk_event_vhost_scsi.so.3.0 00:01:59.584 LIB libspdk_event_iscsi.a 00:01:59.584 SO libspdk_event_iscsi.so.6.0 00:01:59.584 SYMLINK libspdk_event_vhost_scsi.so 00:01:59.584 SYMLINK libspdk_event_iscsi.so 00:01:59.843 SO libspdk.so.6.0 00:01:59.843 SYMLINK libspdk.so 00:01:59.843 CC app/trace_record/trace_record.o 00:01:59.843 CXX app/trace/trace.o 00:01:59.843 CC app/spdk_lspci/spdk_lspci.o 00:01:59.843 CC app/spdk_nvme_perf/perf.o 00:01:59.843 CC app/spdk_top/spdk_top.o 00:01:59.843 CC app/spdk_nvme_identify/identify.o 00:01:59.843 CC app/spdk_nvme_discover/discovery_aer.o 00:01:59.843 CC test/rpc_client/rpc_client_test.o 00:01:59.843 TEST_HEADER include/spdk/accel.h 00:01:59.843 TEST_HEADER include/spdk/accel_module.h 00:01:59.843 TEST_HEADER include/spdk/assert.h 00:01:59.843 TEST_HEADER include/spdk/barrier.h 00:01:59.843 TEST_HEADER include/spdk/base64.h 00:01:59.843 TEST_HEADER include/spdk/bdev.h 00:01:59.843 TEST_HEADER include/spdk/bdev_module.h 00:01:59.843 TEST_HEADER include/spdk/bdev_zone.h 00:01:59.843 TEST_HEADER include/spdk/bit_array.h 00:01:59.843 TEST_HEADER include/spdk/bit_pool.h 00:01:59.843 TEST_HEADER include/spdk/blob_bdev.h 00:01:59.843 TEST_HEADER include/spdk/blobfs_bdev.h 
00:01:59.843 TEST_HEADER include/spdk/blobfs.h 00:01:59.843 TEST_HEADER include/spdk/blob.h 00:01:59.843 TEST_HEADER include/spdk/conf.h 00:01:59.843 TEST_HEADER include/spdk/config.h 00:01:59.843 TEST_HEADER include/spdk/cpuset.h 00:01:59.843 TEST_HEADER include/spdk/crc16.h 00:01:59.843 TEST_HEADER include/spdk/crc32.h 00:01:59.843 TEST_HEADER include/spdk/crc64.h 00:01:59.843 TEST_HEADER include/spdk/dma.h 00:01:59.843 TEST_HEADER include/spdk/dif.h 00:01:59.843 TEST_HEADER include/spdk/endian.h 00:01:59.843 TEST_HEADER include/spdk/env.h 00:01:59.843 TEST_HEADER include/spdk/env_dpdk.h 00:01:59.843 TEST_HEADER include/spdk/event.h 00:01:59.843 TEST_HEADER include/spdk/fd_group.h 00:01:59.843 TEST_HEADER include/spdk/fd.h 00:01:59.843 TEST_HEADER include/spdk/file.h 00:01:59.843 TEST_HEADER include/spdk/fsdev.h 00:01:59.843 TEST_HEADER include/spdk/fsdev_module.h 00:01:59.843 TEST_HEADER include/spdk/ftl.h 00:01:59.843 TEST_HEADER include/spdk/fuse_dispatcher.h 00:01:59.843 TEST_HEADER include/spdk/hexlify.h 00:01:59.843 TEST_HEADER include/spdk/gpt_spec.h 00:01:59.843 TEST_HEADER include/spdk/histogram_data.h 00:01:59.843 TEST_HEADER include/spdk/idxd.h 00:01:59.843 TEST_HEADER include/spdk/init.h 00:01:59.843 TEST_HEADER include/spdk/idxd_spec.h 00:01:59.843 TEST_HEADER include/spdk/ioat_spec.h 00:01:59.843 TEST_HEADER include/spdk/ioat.h 00:01:59.843 TEST_HEADER include/spdk/iscsi_spec.h 00:01:59.843 TEST_HEADER include/spdk/json.h 00:01:59.843 TEST_HEADER include/spdk/jsonrpc.h 00:01:59.843 TEST_HEADER include/spdk/keyring.h 00:01:59.843 TEST_HEADER include/spdk/keyring_module.h 00:01:59.843 TEST_HEADER include/spdk/likely.h 00:01:59.843 TEST_HEADER include/spdk/log.h 00:01:59.843 TEST_HEADER include/spdk/lvol.h 00:01:59.843 TEST_HEADER include/spdk/md5.h 00:01:59.843 TEST_HEADER include/spdk/memory.h 00:01:59.843 TEST_HEADER include/spdk/mmio.h 00:01:59.843 TEST_HEADER include/spdk/nbd.h 00:02:00.106 TEST_HEADER include/spdk/net.h 00:02:00.106 TEST_HEADER 
include/spdk/notify.h 00:02:00.106 TEST_HEADER include/spdk/nvme.h 00:02:00.106 TEST_HEADER include/spdk/nvme_intel.h 00:02:00.106 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:00.106 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:00.106 TEST_HEADER include/spdk/nvme_spec.h 00:02:00.106 TEST_HEADER include/spdk/nvme_zns.h 00:02:00.106 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:00.106 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:00.106 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:00.106 TEST_HEADER include/spdk/nvmf.h 00:02:00.106 TEST_HEADER include/spdk/nvmf_spec.h 00:02:00.106 TEST_HEADER include/spdk/nvmf_transport.h 00:02:00.107 TEST_HEADER include/spdk/opal.h 00:02:00.107 TEST_HEADER include/spdk/opal_spec.h 00:02:00.107 TEST_HEADER include/spdk/pci_ids.h 00:02:00.107 TEST_HEADER include/spdk/pipe.h 00:02:00.107 TEST_HEADER include/spdk/queue.h 00:02:00.107 TEST_HEADER include/spdk/reduce.h 00:02:00.107 TEST_HEADER include/spdk/rpc.h 00:02:00.107 TEST_HEADER include/spdk/scheduler.h 00:02:00.107 TEST_HEADER include/spdk/scsi.h 00:02:00.107 TEST_HEADER include/spdk/scsi_spec.h 00:02:00.107 TEST_HEADER include/spdk/sock.h 00:02:00.107 TEST_HEADER include/spdk/stdinc.h 00:02:00.107 TEST_HEADER include/spdk/string.h 00:02:00.107 TEST_HEADER include/spdk/thread.h 00:02:00.107 TEST_HEADER include/spdk/trace.h 00:02:00.107 TEST_HEADER include/spdk/trace_parser.h 00:02:00.107 CC app/spdk_dd/spdk_dd.o 00:02:00.107 TEST_HEADER include/spdk/tree.h 00:02:00.107 TEST_HEADER include/spdk/ublk.h 00:02:00.107 TEST_HEADER include/spdk/util.h 00:02:00.107 TEST_HEADER include/spdk/uuid.h 00:02:00.107 TEST_HEADER include/spdk/version.h 00:02:00.107 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:00.107 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:00.107 TEST_HEADER include/spdk/vhost.h 00:02:00.107 TEST_HEADER include/spdk/vmd.h 00:02:00.107 TEST_HEADER include/spdk/xor.h 00:02:00.107 TEST_HEADER include/spdk/zipf.h 00:02:00.107 CXX test/cpp_headers/accel.o 00:02:00.107 CXX 
test/cpp_headers/assert.o 00:02:00.107 CXX test/cpp_headers/accel_module.o 00:02:00.107 CXX test/cpp_headers/barrier.o 00:02:00.107 CXX test/cpp_headers/base64.o 00:02:00.107 CXX test/cpp_headers/bdev.o 00:02:00.107 CXX test/cpp_headers/bdev_module.o 00:02:00.107 CXX test/cpp_headers/bdev_zone.o 00:02:00.107 CXX test/cpp_headers/bit_array.o 00:02:00.107 CC app/nvmf_tgt/nvmf_main.o 00:02:00.107 CXX test/cpp_headers/bit_pool.o 00:02:00.107 CXX test/cpp_headers/blob_bdev.o 00:02:00.107 CXX test/cpp_headers/blobfs_bdev.o 00:02:00.107 CXX test/cpp_headers/blobfs.o 00:02:00.107 CXX test/cpp_headers/blob.o 00:02:00.107 CXX test/cpp_headers/conf.o 00:02:00.107 CXX test/cpp_headers/config.o 00:02:00.107 CXX test/cpp_headers/cpuset.o 00:02:00.107 CXX test/cpp_headers/crc16.o 00:02:00.107 CC app/iscsi_tgt/iscsi_tgt.o 00:02:00.107 CC app/spdk_tgt/spdk_tgt.o 00:02:00.107 CXX test/cpp_headers/crc32.o 00:02:00.107 CC examples/util/zipf/zipf.o 00:02:00.107 CC examples/ioat/perf/perf.o 00:02:00.107 CC examples/ioat/verify/verify.o 00:02:00.107 CC test/env/memory/memory_ut.o 00:02:00.107 CC test/app/histogram_perf/histogram_perf.o 00:02:00.107 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:00.107 CC app/fio/nvme/fio_plugin.o 00:02:00.107 CC test/env/vtophys/vtophys.o 00:02:00.107 CC test/app/stub/stub.o 00:02:00.107 CC test/env/pci/pci_ut.o 00:02:00.107 CC test/app/jsoncat/jsoncat.o 00:02:00.107 CC test/thread/poller_perf/poller_perf.o 00:02:00.107 CC app/fio/bdev/fio_plugin.o 00:02:00.107 CC test/dma/test_dma/test_dma.o 00:02:00.107 CC test/app/bdev_svc/bdev_svc.o 00:02:00.369 CC test/env/mem_callbacks/mem_callbacks.o 00:02:00.369 LINK spdk_lspci 00:02:00.369 LINK rpc_client_test 00:02:00.369 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:00.369 LINK spdk_nvme_discover 00:02:00.369 LINK jsoncat 00:02:00.369 LINK histogram_perf 00:02:00.369 LINK vtophys 00:02:00.369 LINK nvmf_tgt 00:02:00.369 LINK zipf 00:02:00.369 LINK spdk_trace_record 00:02:00.369 LINK interrupt_tgt 
00:02:00.369 LINK poller_perf 00:02:00.369 CXX test/cpp_headers/crc64.o 00:02:00.369 CXX test/cpp_headers/dif.o 00:02:00.369 LINK env_dpdk_post_init 00:02:00.369 CXX test/cpp_headers/dma.o 00:02:00.369 CXX test/cpp_headers/endian.o 00:02:00.369 CXX test/cpp_headers/env_dpdk.o 00:02:00.369 CXX test/cpp_headers/env.o 00:02:00.369 CXX test/cpp_headers/event.o 00:02:00.369 CXX test/cpp_headers/fd_group.o 00:02:00.369 CXX test/cpp_headers/fd.o 00:02:00.630 CXX test/cpp_headers/file.o 00:02:00.630 LINK stub 00:02:00.630 LINK iscsi_tgt 00:02:00.630 CXX test/cpp_headers/fsdev.o 00:02:00.630 CXX test/cpp_headers/fsdev_module.o 00:02:00.630 CXX test/cpp_headers/ftl.o 00:02:00.630 CXX test/cpp_headers/fuse_dispatcher.o 00:02:00.630 CXX test/cpp_headers/gpt_spec.o 00:02:00.630 CXX test/cpp_headers/hexlify.o 00:02:00.630 LINK verify 00:02:00.630 LINK ioat_perf 00:02:00.630 LINK spdk_tgt 00:02:00.630 LINK bdev_svc 00:02:00.630 CXX test/cpp_headers/histogram_data.o 00:02:00.630 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:00.630 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:00.630 CXX test/cpp_headers/idxd.o 00:02:00.630 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:00.630 CXX test/cpp_headers/idxd_spec.o 00:02:00.630 CXX test/cpp_headers/init.o 00:02:00.630 CXX test/cpp_headers/ioat.o 00:02:00.630 CXX test/cpp_headers/ioat_spec.o 00:02:00.630 CXX test/cpp_headers/iscsi_spec.o 00:02:00.630 LINK spdk_dd 00:02:00.892 CXX test/cpp_headers/json.o 00:02:00.892 CXX test/cpp_headers/jsonrpc.o 00:02:00.892 LINK spdk_trace 00:02:00.892 CXX test/cpp_headers/keyring.o 00:02:00.892 CXX test/cpp_headers/keyring_module.o 00:02:00.892 LINK pci_ut 00:02:00.892 CXX test/cpp_headers/likely.o 00:02:00.892 CXX test/cpp_headers/log.o 00:02:00.892 CXX test/cpp_headers/lvol.o 00:02:00.892 CXX test/cpp_headers/md5.o 00:02:00.892 CXX test/cpp_headers/memory.o 00:02:00.892 CXX test/cpp_headers/mmio.o 00:02:00.892 CXX test/cpp_headers/nbd.o 00:02:00.892 CXX test/cpp_headers/net.o 00:02:00.892 CXX 
test/cpp_headers/notify.o 00:02:00.892 CXX test/cpp_headers/nvme.o 00:02:00.892 CXX test/cpp_headers/nvme_intel.o 00:02:00.892 CXX test/cpp_headers/nvme_ocssd.o 00:02:00.892 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:00.892 CXX test/cpp_headers/nvme_spec.o 00:02:00.892 CXX test/cpp_headers/nvme_zns.o 00:02:00.892 CXX test/cpp_headers/nvmf_cmd.o 00:02:00.892 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:00.892 CXX test/cpp_headers/nvmf.o 00:02:01.156 CXX test/cpp_headers/nvmf_spec.o 00:02:01.156 CXX test/cpp_headers/nvmf_transport.o 00:02:01.156 CC examples/sock/hello_world/hello_sock.o 00:02:01.156 CC test/event/event_perf/event_perf.o 00:02:01.156 CXX test/cpp_headers/opal.o 00:02:01.156 CC test/event/reactor/reactor.o 00:02:01.156 LINK spdk_nvme 00:02:01.156 CXX test/cpp_headers/opal_spec.o 00:02:01.156 CXX test/cpp_headers/pci_ids.o 00:02:01.156 CXX test/cpp_headers/pipe.o 00:02:01.156 CC examples/thread/thread/thread_ex.o 00:02:01.156 CC examples/vmd/lsvmd/lsvmd.o 00:02:01.156 CC examples/vmd/led/led.o 00:02:01.156 CC examples/idxd/perf/perf.o 00:02:01.156 LINK test_dma 00:02:01.156 LINK nvme_fuzz 00:02:01.156 LINK spdk_bdev 00:02:01.156 CXX test/cpp_headers/queue.o 00:02:01.156 CC test/event/reactor_perf/reactor_perf.o 00:02:01.156 CXX test/cpp_headers/reduce.o 00:02:01.417 CC test/event/app_repeat/app_repeat.o 00:02:01.417 CXX test/cpp_headers/rpc.o 00:02:01.417 CXX test/cpp_headers/scheduler.o 00:02:01.417 CXX test/cpp_headers/scsi.o 00:02:01.417 CXX test/cpp_headers/scsi_spec.o 00:02:01.417 CXX test/cpp_headers/sock.o 00:02:01.417 CXX test/cpp_headers/stdinc.o 00:02:01.417 CXX test/cpp_headers/string.o 00:02:01.417 CXX test/cpp_headers/thread.o 00:02:01.417 CXX test/cpp_headers/trace.o 00:02:01.417 CXX test/cpp_headers/trace_parser.o 00:02:01.417 CXX test/cpp_headers/tree.o 00:02:01.417 CC test/event/scheduler/scheduler.o 00:02:01.417 CXX test/cpp_headers/ublk.o 00:02:01.417 CXX test/cpp_headers/util.o 00:02:01.417 CXX test/cpp_headers/uuid.o 00:02:01.417 CXX 
test/cpp_headers/version.o 00:02:01.417 CXX test/cpp_headers/vfio_user_pci.o 00:02:01.417 CXX test/cpp_headers/vfio_user_spec.o 00:02:01.417 CXX test/cpp_headers/vhost.o 00:02:01.417 LINK spdk_nvme_perf 00:02:01.417 LINK reactor 00:02:01.417 LINK event_perf 00:02:01.417 CXX test/cpp_headers/vmd.o 00:02:01.417 CXX test/cpp_headers/xor.o 00:02:01.417 CXX test/cpp_headers/zipf.o 00:02:01.417 LINK lsvmd 00:02:01.417 LINK led 00:02:01.417 CC app/vhost/vhost.o 00:02:01.417 LINK vhost_fuzz 00:02:01.417 LINK mem_callbacks 00:02:01.676 LINK spdk_nvme_identify 00:02:01.676 LINK reactor_perf 00:02:01.676 LINK hello_sock 00:02:01.676 LINK app_repeat 00:02:01.676 LINK spdk_top 00:02:01.676 LINK thread 00:02:01.936 CC test/nvme/e2edp/nvme_dp.o 00:02:01.936 CC test/nvme/reserve/reserve.o 00:02:01.936 CC test/nvme/reset/reset.o 00:02:01.936 CC test/nvme/sgl/sgl.o 00:02:01.936 CC test/nvme/aer/aer.o 00:02:01.936 CC test/nvme/startup/startup.o 00:02:01.936 CC test/nvme/overhead/overhead.o 00:02:01.936 CC test/nvme/err_injection/err_injection.o 00:02:01.936 CC test/nvme/simple_copy/simple_copy.o 00:02:01.936 LINK scheduler 00:02:01.936 CC test/nvme/connect_stress/connect_stress.o 00:02:01.936 CC test/nvme/fdp/fdp.o 00:02:01.936 CC test/nvme/boot_partition/boot_partition.o 00:02:01.936 CC test/nvme/fused_ordering/fused_ordering.o 00:02:01.936 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:01.936 CC test/nvme/compliance/nvme_compliance.o 00:02:01.936 CC test/nvme/cuse/cuse.o 00:02:01.936 LINK idxd_perf 00:02:01.936 CC test/blobfs/mkfs/mkfs.o 00:02:01.936 LINK vhost 00:02:01.936 CC test/accel/dif/dif.o 00:02:01.936 CC test/lvol/esnap/esnap.o 00:02:01.936 LINK startup 00:02:01.936 CC examples/nvme/arbitration/arbitration.o 00:02:01.936 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:01.936 CC examples/nvme/abort/abort.o 00:02:01.936 CC examples/nvme/reconnect/reconnect.o 00:02:01.936 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:01.936 CC examples/nvme/hotplug/hotplug.o 
00:02:02.194 CC examples/nvme/hello_world/hello_world.o 00:02:02.194 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:02.194 LINK err_injection 00:02:02.194 LINK reserve 00:02:02.194 LINK doorbell_aers 00:02:02.194 LINK fused_ordering 00:02:02.194 LINK mkfs 00:02:02.194 LINK simple_copy 00:02:02.194 LINK sgl 00:02:02.194 LINK connect_stress 00:02:02.194 LINK boot_partition 00:02:02.194 CC examples/accel/perf/accel_perf.o 00:02:02.194 LINK nvme_dp 00:02:02.194 LINK aer 00:02:02.194 CC examples/blob/cli/blobcli.o 00:02:02.194 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:02.194 CC examples/blob/hello_world/hello_blob.o 00:02:02.194 LINK overhead 00:02:02.194 LINK nvme_compliance 00:02:02.194 LINK reset 00:02:02.194 LINK pmr_persistence 00:02:02.452 LINK memory_ut 00:02:02.452 LINK cmb_copy 00:02:02.452 LINK hello_world 00:02:02.452 LINK hotplug 00:02:02.452 LINK fdp 00:02:02.452 LINK abort 00:02:02.452 LINK hello_fsdev 00:02:02.710 LINK reconnect 00:02:02.710 LINK arbitration 00:02:02.710 LINK hello_blob 00:02:02.710 LINK nvme_manage 00:02:02.710 LINK accel_perf 00:02:02.710 LINK dif 00:02:02.710 LINK blobcli 00:02:02.969 LINK iscsi_fuzz 00:02:02.969 CC examples/bdev/hello_world/hello_bdev.o 00:02:02.969 CC examples/bdev/bdevperf/bdevperf.o 00:02:03.227 CC test/bdev/bdevio/bdevio.o 00:02:03.227 LINK hello_bdev 00:02:03.485 LINK bdevio 00:02:03.485 LINK cuse 00:02:04.052 LINK bdevperf 00:02:04.310 CC examples/nvmf/nvmf/nvmf.o 00:02:04.568 LINK nvmf 00:02:07.098 LINK esnap 00:02:07.666 00:02:07.666 real 1m9.111s 00:02:07.666 user 11m49.671s 00:02:07.666 sys 2m34.209s 00:02:07.666 11:04:02 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:07.666 11:04:02 make -- common/autotest_common.sh@10 -- $ set +x 00:02:07.666 ************************************ 00:02:07.666 END TEST make 00:02:07.666 ************************************ 00:02:07.666 11:04:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:07.666 11:04:02 -- pm/common@29 -- $ 
signal_monitor_resources TERM 00:02:07.666 11:04:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:07.666 11:04:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.666 11:04:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:07.666 11:04:02 -- pm/common@44 -- $ pid=2394184 00:02:07.666 11:04:02 -- pm/common@50 -- $ kill -TERM 2394184 00:02:07.666 11:04:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.666 11:04:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:07.666 11:04:02 -- pm/common@44 -- $ pid=2394186 00:02:07.666 11:04:02 -- pm/common@50 -- $ kill -TERM 2394186 00:02:07.666 11:04:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.666 11:04:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:07.666 11:04:02 -- pm/common@44 -- $ pid=2394188 00:02:07.666 11:04:02 -- pm/common@50 -- $ kill -TERM 2394188 00:02:07.666 11:04:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.666 11:04:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:07.666 11:04:02 -- pm/common@44 -- $ pid=2394218 00:02:07.666 11:04:02 -- pm/common@50 -- $ sudo -E kill -TERM 2394218 00:02:07.666 11:04:02 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:07.666 11:04:02 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:07.666 11:04:02 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:07.666 11:04:02 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:07.666 11:04:02 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 
00:02:07.666 11:04:03 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:07.666 11:04:03 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:07.666 11:04:03 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:07.666 11:04:03 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:07.666 11:04:03 -- scripts/common.sh@336 -- # IFS=.-: 00:02:07.666 11:04:03 -- scripts/common.sh@336 -- # read -ra ver1 00:02:07.666 11:04:03 -- scripts/common.sh@337 -- # IFS=.-: 00:02:07.666 11:04:03 -- scripts/common.sh@337 -- # read -ra ver2 00:02:07.666 11:04:03 -- scripts/common.sh@338 -- # local 'op=<' 00:02:07.666 11:04:03 -- scripts/common.sh@340 -- # ver1_l=2 00:02:07.666 11:04:03 -- scripts/common.sh@341 -- # ver2_l=1 00:02:07.666 11:04:03 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:07.666 11:04:03 -- scripts/common.sh@344 -- # case "$op" in 00:02:07.666 11:04:03 -- scripts/common.sh@345 -- # : 1 00:02:07.666 11:04:03 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:07.666 11:04:03 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:07.666 11:04:03 -- scripts/common.sh@365 -- # decimal 1 00:02:07.666 11:04:03 -- scripts/common.sh@353 -- # local d=1 00:02:07.666 11:04:03 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:07.666 11:04:03 -- scripts/common.sh@355 -- # echo 1 00:02:07.666 11:04:03 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:07.666 11:04:03 -- scripts/common.sh@366 -- # decimal 2 00:02:07.666 11:04:03 -- scripts/common.sh@353 -- # local d=2 00:02:07.666 11:04:03 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:07.666 11:04:03 -- scripts/common.sh@355 -- # echo 2 00:02:07.666 11:04:03 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:07.666 11:04:03 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:07.666 11:04:03 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:07.666 11:04:03 -- scripts/common.sh@368 -- # return 0 00:02:07.666 11:04:03 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:07.666 11:04:03 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:07.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:07.666 --rc genhtml_branch_coverage=1 00:02:07.666 --rc genhtml_function_coverage=1 00:02:07.666 --rc genhtml_legend=1 00:02:07.666 --rc geninfo_all_blocks=1 00:02:07.666 --rc geninfo_unexecuted_blocks=1 00:02:07.666 00:02:07.666 ' 00:02:07.666 11:04:03 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:07.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:07.666 --rc genhtml_branch_coverage=1 00:02:07.666 --rc genhtml_function_coverage=1 00:02:07.666 --rc genhtml_legend=1 00:02:07.666 --rc geninfo_all_blocks=1 00:02:07.666 --rc geninfo_unexecuted_blocks=1 00:02:07.666 00:02:07.666 ' 00:02:07.666 11:04:03 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:07.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:07.666 --rc genhtml_branch_coverage=1 00:02:07.666 --rc 
genhtml_function_coverage=1 00:02:07.666 --rc genhtml_legend=1 00:02:07.666 --rc geninfo_all_blocks=1 00:02:07.666 --rc geninfo_unexecuted_blocks=1 00:02:07.666 00:02:07.666 ' 00:02:07.666 11:04:03 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:07.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:07.666 --rc genhtml_branch_coverage=1 00:02:07.666 --rc genhtml_function_coverage=1 00:02:07.666 --rc genhtml_legend=1 00:02:07.666 --rc geninfo_all_blocks=1 00:02:07.666 --rc geninfo_unexecuted_blocks=1 00:02:07.666 00:02:07.666 ' 00:02:07.666 11:04:03 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:07.666 11:04:03 -- nvmf/common.sh@7 -- # uname -s 00:02:07.666 11:04:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:07.666 11:04:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:07.666 11:04:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:07.666 11:04:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:07.666 11:04:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:07.666 11:04:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:07.666 11:04:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:07.666 11:04:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:07.666 11:04:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:07.666 11:04:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:07.666 11:04:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:02:07.666 11:04:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:02:07.666 11:04:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:07.666 11:04:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:07.666 11:04:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:07.666 11:04:03 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:07.666 11:04:03 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:07.666 11:04:03 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:07.666 11:04:03 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:07.666 11:04:03 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.666 11:04:03 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.666 11:04:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.666 11:04:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.666 11:04:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.666 11:04:03 -- paths/export.sh@5 -- # export PATH 00:02:07.667 11:04:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.667 11:04:03 -- nvmf/common.sh@51 -- # : 0 00:02:07.667 11:04:03 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:07.667 11:04:03 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:07.667 11:04:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:07.667 11:04:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:07.667 11:04:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:07.667 11:04:03 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:07.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:07.667 11:04:03 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:07.667 11:04:03 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:07.667 11:04:03 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:07.667 11:04:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:07.667 11:04:03 -- spdk/autotest.sh@32 -- # uname -s 00:02:07.667 11:04:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:07.667 11:04:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:07.667 11:04:03 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:07.667 11:04:03 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:07.667 11:04:03 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:07.667 11:04:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:07.667 11:04:03 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:07.667 11:04:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:07.667 11:04:03 -- spdk/autotest.sh@48 -- # udevadm_pid=2454096 00:02:07.667 11:04:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:07.667 11:04:03 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:07.667 11:04:03 -- pm/common@17 -- # local monitor 00:02:07.667 11:04:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.667 11:04:03 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:07.667 11:04:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.667 11:04:03 -- pm/common@21 -- # date +%s 00:02:07.667 11:04:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.667 11:04:03 -- pm/common@21 -- # date +%s 00:02:07.667 11:04:03 -- pm/common@25 -- # sleep 1 00:02:07.667 11:04:03 -- pm/common@21 -- # date +%s 00:02:07.667 11:04:03 -- pm/common@21 -- # date +%s 00:02:07.667 11:04:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732010643 00:02:07.667 11:04:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732010643 00:02:07.667 11:04:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732010643 00:02:07.667 11:04:03 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732010643 00:02:07.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732010643_collect-cpu-load.pm.log 00:02:07.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732010643_collect-vmstat.pm.log 00:02:07.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732010643_collect-cpu-temp.pm.log 00:02:07.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732010643_collect-bmc-pm.bmc.pm.log 00:02:08.606 
11:04:04 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:08.606 11:04:04 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:08.606 11:04:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:08.606 11:04:04 -- common/autotest_common.sh@10 -- # set +x 00:02:08.606 11:04:04 -- spdk/autotest.sh@59 -- # create_test_list 00:02:08.606 11:04:04 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:08.606 11:04:04 -- common/autotest_common.sh@10 -- # set +x 00:02:08.864 11:04:04 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:08.864 11:04:04 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.864 11:04:04 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.864 11:04:04 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:08.864 11:04:04 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.864 11:04:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:08.864 11:04:04 -- common/autotest_common.sh@1457 -- # uname 00:02:08.864 11:04:04 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:08.864 11:04:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:08.864 11:04:04 -- common/autotest_common.sh@1477 -- # uname 00:02:08.864 11:04:04 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:08.864 11:04:04 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:08.864 11:04:04 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:08.864 lcov: LCOV version 1.15 00:02:08.864 11:04:04 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:26.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:26.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:48.940 11:04:41 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:48.940 11:04:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:48.940 11:04:41 -- common/autotest_common.sh@10 -- # set +x 00:02:48.940 11:04:41 -- spdk/autotest.sh@78 -- # rm -f 00:02:48.940 11:04:41 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:48.940 0000:81:00.0 (8086 0a54): Already using the nvme driver 00:02:48.940 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:48.940 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:48.940 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:48.940 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:48.940 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:48.940 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:48.940 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:48.940 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:48.940 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:48.940 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:48.940 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:48.940 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:48.940 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:48.940 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:48.940 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:48.940 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:48.940 11:04:42 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:48.940 11:04:42 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:48.940 11:04:42 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:48.940 11:04:42 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:48.940 11:04:42 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:48.940 11:04:42 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:48.940 11:04:42 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:48.940 11:04:42 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:48.940 11:04:42 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:48.940 11:04:42 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:48.940 11:04:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:48.940 11:04:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:48.940 11:04:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:48.940 11:04:42 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:48.940 11:04:42 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:48.940 No valid GPT data, bailing 00:02:48.940 11:04:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:48.940 11:04:43 -- scripts/common.sh@394 -- # pt= 00:02:48.940 11:04:43 -- scripts/common.sh@395 -- # return 1 00:02:48.940 11:04:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:48.940 1+0 records in 00:02:48.940 1+0 records out 00:02:48.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00218261 s, 480 MB/s 00:02:48.940 11:04:43 -- spdk/autotest.sh@105 -- # sync 00:02:48.940 11:04:43 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:48.940 11:04:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:48.940 11:04:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:49.875 11:04:45 -- spdk/autotest.sh@111 -- # uname -s 00:02:49.875 11:04:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:49.875 11:04:45 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:49.875 11:04:45 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:51.249 Hugepages 00:02:51.249 node hugesize free / total 00:02:51.249 node0 1048576kB 0 / 0 00:02:51.249 node0 2048kB 0 / 0 00:02:51.249 node1 1048576kB 0 / 0 00:02:51.249 node1 2048kB 0 / 0 00:02:51.249 00:02:51.249 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:51.249 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:51.249 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:51.249 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:51.249 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:51.249 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:51.249 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:51.249 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:51.249 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:51.249 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:51.249 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:51.249 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:51.249 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:51.249 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:51.249 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:51.249 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:51.249 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:51.249 NVMe 0000:81:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:51.249 11:04:46 -- spdk/autotest.sh@117 -- # uname -s 00:02:51.249 11:04:46 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:51.249 11:04:46 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:02:51.249 11:04:46 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:53.150 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:53.150 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:53.150 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:53.150 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:53.150 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:53.150 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:53.150 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:53.150 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:53.150 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:53.150 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:53.150 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:53.150 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:53.150 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:53.150 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:53.150 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:53.150 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:55.060 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:02:55.060 11:04:50 -- common/autotest_common.sh@1517 -- # sleep 1 00:02:55.998 11:04:51 -- common/autotest_common.sh@1518 -- # bdfs=() 00:02:55.998 11:04:51 -- common/autotest_common.sh@1518 -- # local bdfs 00:02:55.998 11:04:51 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:02:55.998 11:04:51 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:02:55.998 11:04:51 -- common/autotest_common.sh@1498 -- # bdfs=() 00:02:55.998 11:04:51 -- common/autotest_common.sh@1498 -- # local bdfs 00:02:55.998 11:04:51 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:02:55.998 11:04:51 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:02:55.998 11:04:51 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:02:55.998 11:04:51 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:02:55.998 11:04:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:81:00.0 00:02:55.998 11:04:51 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.372 Waiting for block devices as requested 00:02:57.372 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:02:57.372 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:02:57.372 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:02:57.632 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:02:57.632 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:02:57.632 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:02:57.893 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:02:57.893 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:02:57.893 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:02:57.893 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:02:58.152 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:02:58.152 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:02:58.152 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:02:58.152 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:02:58.411 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:02:58.411 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:02:58.411 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:02:58.672 11:04:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:02:58.672 11:04:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:81:00.0 00:02:58.672 11:04:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:02:58.672 11:04:53 -- common/autotest_common.sh@1487 -- # grep 0000:81:00.0/nvme/nvme 00:02:58.672 11:04:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 00:02:58.672 11:04:53 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 ]] 00:02:58.672 11:04:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 00:02:58.672 11:04:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:02:58.672 11:04:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:02:58.672 11:04:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:02:58.672 11:04:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:02:58.672 11:04:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:02:58.672 11:04:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:02:58.672 11:04:53 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:02:58.672 11:04:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:02:58.672 11:04:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:02:58.672 11:04:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:02:58.672 11:04:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:02:58.672 11:04:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:02:58.672 11:04:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:02:58.672 11:04:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:02:58.673 11:04:53 -- common/autotest_common.sh@1543 -- # continue 00:02:58.673 11:04:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:02:58.673 11:04:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:58.673 11:04:53 -- common/autotest_common.sh@10 -- # set +x 00:02:58.673 11:04:54 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:02:58.673 11:04:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:58.673 11:04:54 -- common/autotest_common.sh@10 -- # set +x 00:02:58.673 11:04:54 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:00.050 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:00.050 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:03:00.050 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:00.050 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:00.050 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:00.050 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:00.050 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:00.050 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:00.050 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:00.050 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:00.050 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:00.308 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:00.308 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:00.308 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:00.308 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:00.308 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:02.213 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:03:02.213 11:04:57 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:02.213 11:04:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:02.213 11:04:57 -- common/autotest_common.sh@10 -- # set +x 00:03:02.213 11:04:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:02.213 11:04:57 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:02.213 11:04:57 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:02.213 11:04:57 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:02.213 11:04:57 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:02.213 11:04:57 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:02.213 11:04:57 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:02.213 11:04:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:02.213 11:04:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:02.213 11:04:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:02.213 11:04:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:02.213 11:04:57 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:02.213 11:04:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:02.213 11:04:57 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:02.213 11:04:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:81:00.0 00:03:02.213 11:04:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:02.213 11:04:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:81:00.0/device 00:03:02.213 11:04:57 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:02.213 11:04:57 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:02.213 11:04:57 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:02.213 11:04:57 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:02.213 11:04:57 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:81:00.0 00:03:02.213 11:04:57 -- common/autotest_common.sh@1579 -- # [[ -z 0000:81:00.0 ]] 00:03:02.213 11:04:57 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2465145 00:03:02.213 11:04:57 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:02.213 11:04:57 -- common/autotest_common.sh@1585 -- # waitforlisten 2465145 00:03:02.213 11:04:57 -- common/autotest_common.sh@835 -- # '[' -z 2465145 ']' 00:03:02.213 11:04:57 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:02.213 11:04:57 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:02.213 11:04:57 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:02.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:02.213 11:04:57 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:02.213 11:04:57 -- common/autotest_common.sh@10 -- # set +x 00:03:02.213 [2024-11-19 11:04:57.628824] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:03:02.213 [2024-11-19 11:04:57.628896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465145 ] 00:03:02.213 [2024-11-19 11:04:57.700874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:02.471 [2024-11-19 11:04:57.755743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:02.729 11:04:58 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:02.729 11:04:58 -- common/autotest_common.sh@868 -- # return 0 00:03:02.729 11:04:58 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:02.729 11:04:58 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:02.729 11:04:58 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:81:00.0 00:03:06.015 nvme0n1 00:03:06.015 11:05:01 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:06.015 [2024-11-19 11:05:01.369992] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:06.015 request: 00:03:06.015 { 00:03:06.015 "nvme_ctrlr_name": "nvme0", 00:03:06.015 "password": "test", 00:03:06.015 "method": "bdev_nvme_opal_revert", 00:03:06.015 "req_id": 1 00:03:06.015 } 00:03:06.015 Got JSON-RPC error response 00:03:06.015 response: 00:03:06.015 { 00:03:06.015 "code": -32602, 00:03:06.015 "message": "Invalid parameters" 00:03:06.015 } 00:03:06.015 11:05:01 -- common/autotest_common.sh@1591 -- # true 
00:03:06.015 11:05:01 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:06.015 11:05:01 -- common/autotest_common.sh@1595 -- # killprocess 2465145 00:03:06.015 11:05:01 -- common/autotest_common.sh@954 -- # '[' -z 2465145 ']' 00:03:06.015 11:05:01 -- common/autotest_common.sh@958 -- # kill -0 2465145 00:03:06.015 11:05:01 -- common/autotest_common.sh@959 -- # uname 00:03:06.015 11:05:01 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:06.015 11:05:01 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2465145 00:03:06.015 11:05:01 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:06.015 11:05:01 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:06.015 11:05:01 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2465145' 00:03:06.015 killing process with pid 2465145 00:03:06.015 11:05:01 -- common/autotest_common.sh@973 -- # kill 2465145 00:03:06.015 11:05:01 -- common/autotest_common.sh@978 -- # wait 2465145 00:03:09.298 11:05:04 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:09.298 11:05:04 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:09.298 11:05:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:09.298 11:05:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:09.298 11:05:04 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:09.298 11:05:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:09.298 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:03:09.298 11:05:04 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:09.298 11:05:04 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:09.298 11:05:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:09.298 11:05:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:09.298 11:05:04 -- common/autotest_common.sh@10 -- # set +x 00:03:09.298 ************************************ 00:03:09.298 START TEST env 00:03:09.298 
************************************ 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:09.298 * Looking for test storage... 00:03:09.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:09.298 11:05:04 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:09.298 11:05:04 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:09.298 11:05:04 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:09.298 11:05:04 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:09.298 11:05:04 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:09.298 11:05:04 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:09.298 11:05:04 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:09.298 11:05:04 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:09.298 11:05:04 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:09.298 11:05:04 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:09.298 11:05:04 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:09.298 11:05:04 env -- scripts/common.sh@344 -- # case "$op" in 00:03:09.298 11:05:04 env -- scripts/common.sh@345 -- # : 1 00:03:09.298 11:05:04 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:09.298 11:05:04 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:09.298 11:05:04 env -- scripts/common.sh@365 -- # decimal 1 00:03:09.298 11:05:04 env -- scripts/common.sh@353 -- # local d=1 00:03:09.298 11:05:04 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:09.298 11:05:04 env -- scripts/common.sh@355 -- # echo 1 00:03:09.298 11:05:04 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:09.298 11:05:04 env -- scripts/common.sh@366 -- # decimal 2 00:03:09.298 11:05:04 env -- scripts/common.sh@353 -- # local d=2 00:03:09.298 11:05:04 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:09.298 11:05:04 env -- scripts/common.sh@355 -- # echo 2 00:03:09.298 11:05:04 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:09.298 11:05:04 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:09.298 11:05:04 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:09.298 11:05:04 env -- scripts/common.sh@368 -- # return 0 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:09.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:09.298 --rc genhtml_branch_coverage=1 00:03:09.298 --rc genhtml_function_coverage=1 00:03:09.298 --rc genhtml_legend=1 00:03:09.298 --rc geninfo_all_blocks=1 00:03:09.298 --rc geninfo_unexecuted_blocks=1 00:03:09.298 00:03:09.298 ' 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:09.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:09.298 --rc genhtml_branch_coverage=1 00:03:09.298 --rc genhtml_function_coverage=1 00:03:09.298 --rc genhtml_legend=1 00:03:09.298 --rc geninfo_all_blocks=1 00:03:09.298 --rc geninfo_unexecuted_blocks=1 00:03:09.298 00:03:09.298 ' 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:09.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:09.298 --rc genhtml_branch_coverage=1 00:03:09.298 --rc genhtml_function_coverage=1 00:03:09.298 --rc genhtml_legend=1 00:03:09.298 --rc geninfo_all_blocks=1 00:03:09.298 --rc geninfo_unexecuted_blocks=1 00:03:09.298 00:03:09.298 ' 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:09.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:09.298 --rc genhtml_branch_coverage=1 00:03:09.298 --rc genhtml_function_coverage=1 00:03:09.298 --rc genhtml_legend=1 00:03:09.298 --rc geninfo_all_blocks=1 00:03:09.298 --rc geninfo_unexecuted_blocks=1 00:03:09.298 00:03:09.298 ' 00:03:09.298 11:05:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:09.298 11:05:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:09.298 ************************************ 00:03:09.298 START TEST env_memory 00:03:09.298 ************************************ 00:03:09.298 11:05:04 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:09.298 00:03:09.298 00:03:09.298 CUnit - A unit testing framework for C - Version 2.1-3 00:03:09.298 http://cunit.sourceforge.net/ 00:03:09.298 00:03:09.298 00:03:09.298 Suite: memory 00:03:09.298 Test: alloc and free memory map ...[2024-11-19 11:05:04.314475] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:09.298 passed 00:03:09.298 Test: mem map translation ...[2024-11-19 11:05:04.334430] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:09.298 [2024-11-19 
11:05:04.334451] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:09.298 [2024-11-19 11:05:04.334507] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:09.298 [2024-11-19 11:05:04.334519] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:09.298 passed 00:03:09.298 Test: mem map registration ...[2024-11-19 11:05:04.375805] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:09.298 [2024-11-19 11:05:04.375825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:09.298 passed 00:03:09.298 Test: mem map adjacent registrations ...passed 00:03:09.298 00:03:09.298 Run Summary: Type Total Ran Passed Failed Inactive 00:03:09.298 suites 1 1 n/a 0 0 00:03:09.298 tests 4 4 4 0 0 00:03:09.298 asserts 152 152 152 0 n/a 00:03:09.298 00:03:09.298 Elapsed time = 0.142 seconds 00:03:09.298 00:03:09.298 real 0m0.151s 00:03:09.298 user 0m0.141s 00:03:09.298 sys 0m0.009s 00:03:09.298 11:05:04 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:09.298 11:05:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:09.298 ************************************ 00:03:09.298 END TEST env_memory 00:03:09.298 ************************************ 00:03:09.298 11:05:04 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:09.298 11:05:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:09.298 11:05:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:09.298 ************************************ 00:03:09.298 START TEST env_vtophys 00:03:09.299 ************************************ 00:03:09.299 11:05:04 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:09.299 EAL: lib.eal log level changed from notice to debug 00:03:09.299 EAL: Detected lcore 0 as core 0 on socket 0 00:03:09.299 EAL: Detected lcore 1 as core 1 on socket 0 00:03:09.299 EAL: Detected lcore 2 as core 2 on socket 0 00:03:09.299 EAL: Detected lcore 3 as core 3 on socket 0 00:03:09.299 EAL: Detected lcore 4 as core 4 on socket 0 00:03:09.299 EAL: Detected lcore 5 as core 5 on socket 0 00:03:09.299 EAL: Detected lcore 6 as core 8 on socket 0 00:03:09.299 EAL: Detected lcore 7 as core 9 on socket 0 00:03:09.299 EAL: Detected lcore 8 as core 10 on socket 0 00:03:09.299 EAL: Detected lcore 9 as core 11 on socket 0 00:03:09.299 EAL: Detected lcore 10 as core 12 on socket 0 00:03:09.299 EAL: Detected lcore 11 as core 13 on socket 0 00:03:09.299 EAL: Detected lcore 12 as core 0 on socket 1 00:03:09.299 EAL: Detected lcore 13 as core 1 on socket 1 00:03:09.299 EAL: Detected lcore 14 as core 2 on socket 1 00:03:09.299 EAL: Detected lcore 15 as core 3 on socket 1 00:03:09.299 EAL: Detected lcore 16 as core 4 on socket 1 00:03:09.299 EAL: Detected lcore 17 as core 5 on socket 1 00:03:09.299 EAL: Detected lcore 18 as core 8 on socket 1 00:03:09.299 EAL: Detected lcore 19 as core 9 on socket 1 00:03:09.299 EAL: Detected lcore 20 as core 10 on socket 1 00:03:09.299 EAL: Detected lcore 21 as core 11 on socket 1 00:03:09.299 EAL: Detected lcore 22 as core 12 on socket 1 00:03:09.299 EAL: Detected lcore 23 as core 13 on socket 1 00:03:09.299 EAL: Detected lcore 24 as core 0 on socket 0 00:03:09.299 EAL: Detected lcore 25 as core 
1 on socket 0 00:03:09.299 EAL: Detected lcore 26 as core 2 on socket 0 00:03:09.299 EAL: Detected lcore 27 as core 3 on socket 0 00:03:09.299 EAL: Detected lcore 28 as core 4 on socket 0 00:03:09.299 EAL: Detected lcore 29 as core 5 on socket 0 00:03:09.299 EAL: Detected lcore 30 as core 8 on socket 0 00:03:09.299 EAL: Detected lcore 31 as core 9 on socket 0 00:03:09.299 EAL: Detected lcore 32 as core 10 on socket 0 00:03:09.299 EAL: Detected lcore 33 as core 11 on socket 0 00:03:09.299 EAL: Detected lcore 34 as core 12 on socket 0 00:03:09.299 EAL: Detected lcore 35 as core 13 on socket 0 00:03:09.299 EAL: Detected lcore 36 as core 0 on socket 1 00:03:09.299 EAL: Detected lcore 37 as core 1 on socket 1 00:03:09.299 EAL: Detected lcore 38 as core 2 on socket 1 00:03:09.299 EAL: Detected lcore 39 as core 3 on socket 1 00:03:09.299 EAL: Detected lcore 40 as core 4 on socket 1 00:03:09.299 EAL: Detected lcore 41 as core 5 on socket 1 00:03:09.299 EAL: Detected lcore 42 as core 8 on socket 1 00:03:09.299 EAL: Detected lcore 43 as core 9 on socket 1 00:03:09.299 EAL: Detected lcore 44 as core 10 on socket 1 00:03:09.299 EAL: Detected lcore 45 as core 11 on socket 1 00:03:09.299 EAL: Detected lcore 46 as core 12 on socket 1 00:03:09.299 EAL: Detected lcore 47 as core 13 on socket 1 00:03:09.299 EAL: Maximum logical cores by configuration: 128 00:03:09.299 EAL: Detected CPU lcores: 48 00:03:09.299 EAL: Detected NUMA nodes: 2 00:03:09.299 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:09.299 EAL: Detected shared linkage of DPDK 00:03:09.299 EAL: No shared files mode enabled, IPC will be disabled 00:03:09.299 EAL: Bus pci wants IOVA as 'DC' 00:03:09.299 EAL: Buses did not request a specific IOVA mode. 00:03:09.299 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:09.299 EAL: Selected IOVA mode 'VA' 00:03:09.299 EAL: Probing VFIO support... 
00:03:09.299 EAL: IOMMU type 1 (Type 1) is supported 00:03:09.299 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:09.299 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:09.299 EAL: VFIO support initialized 00:03:09.299 EAL: Ask a virtual area of 0x2e000 bytes 00:03:09.299 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:09.299 EAL: Setting up physically contiguous memory... 00:03:09.299 EAL: Setting maximum number of open files to 524288 00:03:09.299 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:09.299 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:09.299 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:09.299 EAL: Ask a virtual area of 0x61000 bytes 00:03:09.299 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:09.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:09.299 EAL: Ask a virtual area of 0x400000000 bytes 00:03:09.299 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:09.299 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:09.299 EAL: Ask a virtual area of 0x61000 bytes 00:03:09.299 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:09.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:09.299 EAL: Ask a virtual area of 0x400000000 bytes 00:03:09.299 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:09.299 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:09.299 EAL: Ask a virtual area of 0x61000 bytes 00:03:09.299 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:09.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:09.299 EAL: Ask a virtual area of 0x400000000 bytes 00:03:09.299 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:09.299 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:09.299 EAL: Ask a virtual area of 0x61000 bytes 00:03:09.299 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:09.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:09.299 EAL: Ask a virtual area of 0x400000000 bytes 00:03:09.299 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:09.299 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:09.299 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:09.299 EAL: Ask a virtual area of 0x61000 bytes 00:03:09.299 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:09.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:09.299 EAL: Ask a virtual area of 0x400000000 bytes 00:03:09.299 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:09.299 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:09.299 EAL: Ask a virtual area of 0x61000 bytes 00:03:09.299 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:09.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:09.299 EAL: Ask a virtual area of 0x400000000 bytes 00:03:09.299 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:09.299 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:09.299 EAL: Ask a virtual area of 0x61000 bytes 00:03:09.299 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:09.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:09.299 EAL: Ask a virtual area of 0x400000000 bytes 00:03:09.299 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:09.299 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:09.299 EAL: Ask a virtual area of 0x61000 bytes 00:03:09.299 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:09.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:09.299 EAL: Ask a virtual area of 0x400000000 bytes 00:03:09.299 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:03:09.299 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:09.299 EAL: Hugepages will be freed exactly as allocated. 00:03:09.299 EAL: No shared files mode enabled, IPC is disabled 00:03:09.299 EAL: No shared files mode enabled, IPC is disabled 00:03:09.299 EAL: TSC frequency is ~2700000 KHz 00:03:09.299 EAL: Main lcore 0 is ready (tid=7fe713185a00;cpuset=[0]) 00:03:09.299 EAL: Trying to obtain current memory policy. 00:03:09.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:09.299 EAL: Restoring previous memory policy: 0 00:03:09.299 EAL: request: mp_malloc_sync 00:03:09.299 EAL: No shared files mode enabled, IPC is disabled 00:03:09.299 EAL: Heap on socket 0 was expanded by 2MB 00:03:09.299 EAL: No shared files mode enabled, IPC is disabled 00:03:09.299 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:09.299 EAL: Mem event callback 'spdk:(nil)' registered 00:03:09.299 00:03:09.299 00:03:09.299 CUnit - A unit testing framework for C - Version 2.1-3 00:03:09.299 http://cunit.sourceforge.net/ 00:03:09.299 00:03:09.299 00:03:09.299 Suite: components_suite 00:03:09.299 Test: vtophys_malloc_test ...passed 00:03:09.299 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:09.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:09.299 EAL: Restoring previous memory policy: 4 00:03:09.299 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.299 EAL: request: mp_malloc_sync 00:03:09.299 EAL: No shared files mode enabled, IPC is disabled 00:03:09.299 EAL: Heap on socket 0 was expanded by 4MB 00:03:09.299 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.299 EAL: request: mp_malloc_sync 00:03:09.299 EAL: No shared files mode enabled, IPC is disabled 00:03:09.299 EAL: Heap on socket 0 was shrunk by 4MB 00:03:09.299 EAL: Trying to obtain current memory policy. 
00:03:09.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:09.299 EAL: Restoring previous memory policy: 4 00:03:09.299 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.299 EAL: request: mp_malloc_sync 00:03:09.299 EAL: No shared files mode enabled, IPC is disabled 00:03:09.299 EAL: Heap on socket 0 was expanded by 6MB 00:03:09.299 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.299 EAL: request: mp_malloc_sync 00:03:09.299 EAL: No shared files mode enabled, IPC is disabled 00:03:09.299 EAL: Heap on socket 0 was shrunk by 6MB 00:03:09.299 EAL: Trying to obtain current memory policy. 00:03:09.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:09.299 EAL: Restoring previous memory policy: 4 00:03:09.299 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.299 EAL: request: mp_malloc_sync 00:03:09.299 EAL: No shared files mode enabled, IPC is disabled 00:03:09.299 EAL: Heap on socket 0 was expanded by 10MB 00:03:09.299 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.299 EAL: request: mp_malloc_sync 00:03:09.299 EAL: No shared files mode enabled, IPC is disabled 00:03:09.299 EAL: Heap on socket 0 was shrunk by 10MB 00:03:09.299 EAL: Trying to obtain current memory policy. 00:03:09.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:09.300 EAL: Restoring previous memory policy: 4 00:03:09.300 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.300 EAL: request: mp_malloc_sync 00:03:09.300 EAL: No shared files mode enabled, IPC is disabled 00:03:09.300 EAL: Heap on socket 0 was expanded by 18MB 00:03:09.300 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.300 EAL: request: mp_malloc_sync 00:03:09.300 EAL: No shared files mode enabled, IPC is disabled 00:03:09.300 EAL: Heap on socket 0 was shrunk by 18MB 00:03:09.300 EAL: Trying to obtain current memory policy. 
00:03:09.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:09.300 EAL: Restoring previous memory policy: 4 00:03:09.300 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.300 EAL: request: mp_malloc_sync 00:03:09.300 EAL: No shared files mode enabled, IPC is disabled 00:03:09.300 EAL: Heap on socket 0 was expanded by 34MB 00:03:09.300 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.300 EAL: request: mp_malloc_sync 00:03:09.300 EAL: No shared files mode enabled, IPC is disabled 00:03:09.300 EAL: Heap on socket 0 was shrunk by 34MB 00:03:09.300 EAL: Trying to obtain current memory policy. 00:03:09.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:09.300 EAL: Restoring previous memory policy: 4 00:03:09.300 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.300 EAL: request: mp_malloc_sync 00:03:09.300 EAL: No shared files mode enabled, IPC is disabled 00:03:09.300 EAL: Heap on socket 0 was expanded by 66MB 00:03:09.300 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.300 EAL: request: mp_malloc_sync 00:03:09.300 EAL: No shared files mode enabled, IPC is disabled 00:03:09.300 EAL: Heap on socket 0 was shrunk by 66MB 00:03:09.300 EAL: Trying to obtain current memory policy. 00:03:09.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:09.300 EAL: Restoring previous memory policy: 4 00:03:09.300 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.300 EAL: request: mp_malloc_sync 00:03:09.300 EAL: No shared files mode enabled, IPC is disabled 00:03:09.300 EAL: Heap on socket 0 was expanded by 130MB 00:03:09.300 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.300 EAL: request: mp_malloc_sync 00:03:09.300 EAL: No shared files mode enabled, IPC is disabled 00:03:09.300 EAL: Heap on socket 0 was shrunk by 130MB 00:03:09.300 EAL: Trying to obtain current memory policy. 
00:03:09.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:09.300 EAL: Restoring previous memory policy: 4 00:03:09.300 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.300 EAL: request: mp_malloc_sync 00:03:09.300 EAL: No shared files mode enabled, IPC is disabled 00:03:09.300 EAL: Heap on socket 0 was expanded by 258MB 00:03:09.557 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.557 EAL: request: mp_malloc_sync 00:03:09.557 EAL: No shared files mode enabled, IPC is disabled 00:03:09.557 EAL: Heap on socket 0 was shrunk by 258MB 00:03:09.557 EAL: Trying to obtain current memory policy. 00:03:09.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:09.557 EAL: Restoring previous memory policy: 4 00:03:09.557 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.557 EAL: request: mp_malloc_sync 00:03:09.558 EAL: No shared files mode enabled, IPC is disabled 00:03:09.558 EAL: Heap on socket 0 was expanded by 514MB 00:03:09.816 EAL: Calling mem event callback 'spdk:(nil)' 00:03:09.816 EAL: request: mp_malloc_sync 00:03:09.816 EAL: No shared files mode enabled, IPC is disabled 00:03:09.816 EAL: Heap on socket 0 was shrunk by 514MB 00:03:09.816 EAL: Trying to obtain current memory policy. 
00:03:09.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:10.081 EAL: Restoring previous memory policy: 4 00:03:10.081 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.081 EAL: request: mp_malloc_sync 00:03:10.081 EAL: No shared files mode enabled, IPC is disabled 00:03:10.081 EAL: Heap on socket 0 was expanded by 1026MB 00:03:10.431 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.716 EAL: request: mp_malloc_sync 00:03:10.716 EAL: No shared files mode enabled, IPC is disabled 00:03:10.716 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:10.716 passed 00:03:10.716 00:03:10.716 Run Summary: Type Total Ran Passed Failed Inactive 00:03:10.716 suites 1 1 n/a 0 0 00:03:10.716 tests 2 2 2 0 0 00:03:10.716 asserts 497 497 497 0 n/a 00:03:10.716 00:03:10.716 Elapsed time = 1.314 seconds 00:03:10.716 EAL: Calling mem event callback 'spdk:(nil)' 00:03:10.716 EAL: request: mp_malloc_sync 00:03:10.716 EAL: No shared files mode enabled, IPC is disabled 00:03:10.716 EAL: Heap on socket 0 was shrunk by 2MB 00:03:10.716 EAL: No shared files mode enabled, IPC is disabled 00:03:10.716 EAL: No shared files mode enabled, IPC is disabled 00:03:10.716 EAL: No shared files mode enabled, IPC is disabled 00:03:10.716 00:03:10.716 real 0m1.442s 00:03:10.716 user 0m0.842s 00:03:10.716 sys 0m0.567s 00:03:10.716 11:05:05 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:10.716 11:05:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:10.716 ************************************ 00:03:10.716 END TEST env_vtophys 00:03:10.716 ************************************ 00:03:10.716 11:05:05 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:10.716 11:05:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:10.716 11:05:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:10.716 11:05:05 env -- common/autotest_common.sh@10 -- # set +x 00:03:10.716 
************************************ 00:03:10.716 START TEST env_pci 00:03:10.716 ************************************ 00:03:10.716 11:05:05 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:10.716 00:03:10.716 00:03:10.716 CUnit - A unit testing framework for C - Version 2.1-3 00:03:10.716 http://cunit.sourceforge.net/ 00:03:10.716 00:03:10.716 00:03:10.716 Suite: pci 00:03:10.716 Test: pci_hook ...[2024-11-19 11:05:05.982583] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2466169 has claimed it 00:03:10.716 EAL: Cannot find device (10000:00:01.0) 00:03:10.716 EAL: Failed to attach device on primary process 00:03:10.716 passed 00:03:10.716 00:03:10.716 Run Summary: Type Total Ran Passed Failed Inactive 00:03:10.716 suites 1 1 n/a 0 0 00:03:10.716 tests 1 1 1 0 0 00:03:10.716 asserts 25 25 25 0 n/a 00:03:10.716 00:03:10.716 Elapsed time = 0.025 seconds 00:03:10.716 00:03:10.716 real 0m0.039s 00:03:10.716 user 0m0.015s 00:03:10.716 sys 0m0.023s 00:03:10.716 11:05:06 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:10.716 11:05:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:10.716 ************************************ 00:03:10.716 END TEST env_pci 00:03:10.716 ************************************ 00:03:10.716 11:05:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:10.716 11:05:06 env -- env/env.sh@15 -- # uname 00:03:10.716 11:05:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:10.716 11:05:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:10.716 11:05:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:10.716 11:05:06 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:10.716 11:05:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:10.716 11:05:06 env -- common/autotest_common.sh@10 -- # set +x 00:03:10.716 ************************************ 00:03:10.716 START TEST env_dpdk_post_init 00:03:10.716 ************************************ 00:03:10.716 11:05:06 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:10.716 EAL: Detected CPU lcores: 48 00:03:10.716 EAL: Detected NUMA nodes: 2 00:03:10.716 EAL: Detected shared linkage of DPDK 00:03:10.716 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:10.716 EAL: Selected IOVA mode 'VA' 00:03:10.716 EAL: VFIO support initialized 00:03:10.716 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:10.716 EAL: Using IOMMU type 1 (Type 1) 00:03:10.716 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:10.716 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:10.976 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:10.976 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:11.913 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:81:00.0 (socket 1) 00:03:16.093 EAL: Releasing PCI mapped resource for 0000:81:00.0 00:03:16.093 EAL: Calling pci_unmap_resource for 0000:81:00.0 at 0x202001040000 00:03:16.093 Starting DPDK initialization... 00:03:16.093 Starting SPDK post initialization... 00:03:16.093 SPDK NVMe probe 00:03:16.093 Attaching to 0000:81:00.0 00:03:16.093 Attached to 0000:81:00.0 00:03:16.093 Cleaning up... 00:03:16.093 00:03:16.093 real 0m5.329s 00:03:16.093 user 0m3.828s 00:03:16.093 sys 0m0.564s 00:03:16.093 11:05:11 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:16.093 11:05:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:16.093 ************************************ 00:03:16.093 END TEST env_dpdk_post_init 00:03:16.093 ************************************ 00:03:16.093 11:05:11 env -- env/env.sh@26 -- # uname 00:03:16.093 11:05:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:16.093 11:05:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:16.093 11:05:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:16.093 11:05:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:16.093 11:05:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:16.093 ************************************ 00:03:16.093 START TEST env_mem_callbacks 00:03:16.093 ************************************ 00:03:16.093 11:05:11 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:16.093 EAL: Detected CPU lcores: 48 00:03:16.093 EAL: Detected NUMA nodes: 2 00:03:16.093 EAL: Detected shared linkage of DPDK 00:03:16.093 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:16.093 EAL: Selected IOVA mode 'VA' 00:03:16.093 EAL: VFIO support initialized 00:03:16.093 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:16.093 00:03:16.093 00:03:16.093 CUnit - A unit testing framework for C - Version 2.1-3 00:03:16.093 http://cunit.sourceforge.net/ 00:03:16.093 00:03:16.093 00:03:16.093 Suite: memory 00:03:16.093 Test: test ... 00:03:16.093 register 0x200000200000 2097152 00:03:16.093 malloc 3145728 00:03:16.093 register 0x200000400000 4194304 00:03:16.093 buf 0x200000500000 len 3145728 PASSED 00:03:16.093 malloc 64 00:03:16.093 buf 0x2000004fff40 len 64 PASSED 00:03:16.093 malloc 4194304 00:03:16.093 register 0x200000800000 6291456 00:03:16.093 buf 0x200000a00000 len 4194304 PASSED 00:03:16.093 free 0x200000500000 3145728 00:03:16.093 free 0x2000004fff40 64 00:03:16.093 unregister 0x200000400000 4194304 PASSED 00:03:16.093 free 0x200000a00000 4194304 00:03:16.093 unregister 0x200000800000 6291456 PASSED 00:03:16.093 malloc 8388608 00:03:16.093 register 0x200000400000 10485760 00:03:16.093 buf 0x200000600000 len 8388608 PASSED 00:03:16.093 free 0x200000600000 8388608 00:03:16.093 unregister 0x200000400000 10485760 PASSED 00:03:16.093 passed 00:03:16.093 00:03:16.093 Run Summary: Type Total Ran Passed Failed Inactive 00:03:16.093 suites 1 1 n/a 0 0 00:03:16.093 tests 1 1 1 0 0 00:03:16.093 asserts 15 15 15 0 n/a 00:03:16.093 00:03:16.093 Elapsed time = 0.005 seconds 00:03:16.093 00:03:16.093 real 0m0.051s 00:03:16.093 user 0m0.016s 00:03:16.093 sys 0m0.035s 00:03:16.093 11:05:11 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:16.093 11:05:11 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:16.093 ************************************ 00:03:16.093 END TEST env_mem_callbacks 00:03:16.093 ************************************ 00:03:16.093 00:03:16.093 real 0m7.404s 00:03:16.093 user 0m5.031s 00:03:16.093 sys 0m1.425s 00:03:16.093 11:05:11 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:16.093 11:05:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:16.093 ************************************ 00:03:16.093 END TEST env 00:03:16.093 ************************************ 00:03:16.093 11:05:11 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:16.093 11:05:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:16.093 11:05:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:16.093 11:05:11 -- common/autotest_common.sh@10 -- # set +x 00:03:16.093 ************************************ 00:03:16.093 START TEST rpc 00:03:16.094 ************************************ 00:03:16.094 11:05:11 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:16.352 * Looking for test storage... 
00:03:16.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:16.352 11:05:11 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:16.352 11:05:11 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:16.352 11:05:11 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:16.352 11:05:11 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:16.352 11:05:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:16.352 11:05:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:16.352 11:05:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:16.352 11:05:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.352 11:05:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:16.352 11:05:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:16.352 11:05:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:16.352 11:05:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:16.352 11:05:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:16.352 11:05:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:16.352 11:05:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:16.352 11:05:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:16.352 11:05:11 rpc -- scripts/common.sh@345 -- # : 1 00:03:16.352 11:05:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:16.352 11:05:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:16.352 11:05:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:16.352 11:05:11 rpc -- scripts/common.sh@353 -- # local d=1 00:03:16.352 11:05:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.352 11:05:11 rpc -- scripts/common.sh@355 -- # echo 1 00:03:16.352 11:05:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:16.352 11:05:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:16.352 11:05:11 rpc -- scripts/common.sh@353 -- # local d=2 00:03:16.352 11:05:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.352 11:05:11 rpc -- scripts/common.sh@355 -- # echo 2 00:03:16.352 11:05:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:16.352 11:05:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:16.352 11:05:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:16.352 11:05:11 rpc -- scripts/common.sh@368 -- # return 0 00:03:16.352 11:05:11 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.352 11:05:11 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:16.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.352 --rc genhtml_branch_coverage=1 00:03:16.352 --rc genhtml_function_coverage=1 00:03:16.352 --rc genhtml_legend=1 00:03:16.352 --rc geninfo_all_blocks=1 00:03:16.352 --rc geninfo_unexecuted_blocks=1 00:03:16.352 00:03:16.352 ' 00:03:16.352 11:05:11 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:16.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.352 --rc genhtml_branch_coverage=1 00:03:16.352 --rc genhtml_function_coverage=1 00:03:16.352 --rc genhtml_legend=1 00:03:16.352 --rc geninfo_all_blocks=1 00:03:16.352 --rc geninfo_unexecuted_blocks=1 00:03:16.352 00:03:16.352 ' 00:03:16.352 11:05:11 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:16.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:16.352 --rc genhtml_branch_coverage=1 00:03:16.352 --rc genhtml_function_coverage=1 00:03:16.352 --rc genhtml_legend=1 00:03:16.352 --rc geninfo_all_blocks=1 00:03:16.352 --rc geninfo_unexecuted_blocks=1 00:03:16.352 00:03:16.352 ' 00:03:16.352 11:05:11 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:16.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.352 --rc genhtml_branch_coverage=1 00:03:16.352 --rc genhtml_function_coverage=1 00:03:16.352 --rc genhtml_legend=1 00:03:16.352 --rc geninfo_all_blocks=1 00:03:16.352 --rc geninfo_unexecuted_blocks=1 00:03:16.352 00:03:16.352 ' 00:03:16.352 11:05:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2466973 00:03:16.353 11:05:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:16.353 11:05:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:16.353 11:05:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2466973 00:03:16.353 11:05:11 rpc -- common/autotest_common.sh@835 -- # '[' -z 2466973 ']' 00:03:16.353 11:05:11 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:16.353 11:05:11 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:16.353 11:05:11 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:16.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:16.353 11:05:11 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:16.353 11:05:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:16.353 [2024-11-19 11:05:11.750694] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:03:16.353 [2024-11-19 11:05:11.750804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466973 ] 00:03:16.353 [2024-11-19 11:05:11.826376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:16.611 [2024-11-19 11:05:11.884287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:16.611 [2024-11-19 11:05:11.884349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2466973' to capture a snapshot of events at runtime. 00:03:16.611 [2024-11-19 11:05:11.884371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:16.611 [2024-11-19 11:05:11.884400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:16.611 [2024-11-19 11:05:11.884410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2466973 for offline analysis/debug. 
00:03:16.611 [2024-11-19 11:05:11.885063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:16.869 11:05:12 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:16.869 11:05:12 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:16.869 11:05:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:16.869 11:05:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:16.869 11:05:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:16.869 11:05:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:16.869 11:05:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:16.869 11:05:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:16.869 11:05:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:16.869 ************************************ 00:03:16.869 START TEST rpc_integrity 00:03:16.870 ************************************ 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:16.870 11:05:12 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:16.870 { 00:03:16.870 "name": "Malloc0", 00:03:16.870 "aliases": [ 00:03:16.870 "39994244-aa2e-4eba-9bf3-08db7f0ed2a2" 00:03:16.870 ], 00:03:16.870 "product_name": "Malloc disk", 00:03:16.870 "block_size": 512, 00:03:16.870 "num_blocks": 16384, 00:03:16.870 "uuid": "39994244-aa2e-4eba-9bf3-08db7f0ed2a2", 00:03:16.870 "assigned_rate_limits": { 00:03:16.870 "rw_ios_per_sec": 0, 00:03:16.870 "rw_mbytes_per_sec": 0, 00:03:16.870 "r_mbytes_per_sec": 0, 00:03:16.870 "w_mbytes_per_sec": 0 00:03:16.870 }, 00:03:16.870 "claimed": false, 00:03:16.870 "zoned": false, 00:03:16.870 "supported_io_types": { 00:03:16.870 "read": true, 00:03:16.870 "write": true, 00:03:16.870 "unmap": true, 00:03:16.870 "flush": true, 00:03:16.870 "reset": true, 00:03:16.870 "nvme_admin": false, 00:03:16.870 "nvme_io": false, 00:03:16.870 "nvme_io_md": false, 00:03:16.870 "write_zeroes": true, 00:03:16.870 "zcopy": true, 00:03:16.870 "get_zone_info": false, 00:03:16.870 
"zone_management": false, 00:03:16.870 "zone_append": false, 00:03:16.870 "compare": false, 00:03:16.870 "compare_and_write": false, 00:03:16.870 "abort": true, 00:03:16.870 "seek_hole": false, 00:03:16.870 "seek_data": false, 00:03:16.870 "copy": true, 00:03:16.870 "nvme_iov_md": false 00:03:16.870 }, 00:03:16.870 "memory_domains": [ 00:03:16.870 { 00:03:16.870 "dma_device_id": "system", 00:03:16.870 "dma_device_type": 1 00:03:16.870 }, 00:03:16.870 { 00:03:16.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:16.870 "dma_device_type": 2 00:03:16.870 } 00:03:16.870 ], 00:03:16.870 "driver_specific": {} 00:03:16.870 } 00:03:16.870 ]' 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.870 [2024-11-19 11:05:12.284722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:16.870 [2024-11-19 11:05:12.284762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:16.870 [2024-11-19 11:05:12.284782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdb8750 00:03:16.870 [2024-11-19 11:05:12.284796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:16.870 [2024-11-19 11:05:12.286148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:16.870 [2024-11-19 11:05:12.286171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:16.870 Passthru0 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:16.870 { 00:03:16.870 "name": "Malloc0", 00:03:16.870 "aliases": [ 00:03:16.870 "39994244-aa2e-4eba-9bf3-08db7f0ed2a2" 00:03:16.870 ], 00:03:16.870 "product_name": "Malloc disk", 00:03:16.870 "block_size": 512, 00:03:16.870 "num_blocks": 16384, 00:03:16.870 "uuid": "39994244-aa2e-4eba-9bf3-08db7f0ed2a2", 00:03:16.870 "assigned_rate_limits": { 00:03:16.870 "rw_ios_per_sec": 0, 00:03:16.870 "rw_mbytes_per_sec": 0, 00:03:16.870 "r_mbytes_per_sec": 0, 00:03:16.870 "w_mbytes_per_sec": 0 00:03:16.870 }, 00:03:16.870 "claimed": true, 00:03:16.870 "claim_type": "exclusive_write", 00:03:16.870 "zoned": false, 00:03:16.870 "supported_io_types": { 00:03:16.870 "read": true, 00:03:16.870 "write": true, 00:03:16.870 "unmap": true, 00:03:16.870 "flush": true, 00:03:16.870 "reset": true, 00:03:16.870 "nvme_admin": false, 00:03:16.870 "nvme_io": false, 00:03:16.870 "nvme_io_md": false, 00:03:16.870 "write_zeroes": true, 00:03:16.870 "zcopy": true, 00:03:16.870 "get_zone_info": false, 00:03:16.870 "zone_management": false, 00:03:16.870 "zone_append": false, 00:03:16.870 "compare": false, 00:03:16.870 "compare_and_write": false, 00:03:16.870 "abort": true, 00:03:16.870 "seek_hole": false, 00:03:16.870 "seek_data": false, 00:03:16.870 "copy": true, 00:03:16.870 "nvme_iov_md": false 00:03:16.870 }, 00:03:16.870 "memory_domains": [ 00:03:16.870 { 00:03:16.870 "dma_device_id": "system", 00:03:16.870 "dma_device_type": 1 00:03:16.870 }, 00:03:16.870 { 00:03:16.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:16.870 "dma_device_type": 2 00:03:16.870 } 00:03:16.870 ], 00:03:16.870 "driver_specific": {} 00:03:16.870 }, 00:03:16.870 { 
00:03:16.870 "name": "Passthru0", 00:03:16.870 "aliases": [ 00:03:16.870 "a293eaba-ab59-5535-a97a-75103b833065" 00:03:16.870 ], 00:03:16.870 "product_name": "passthru", 00:03:16.870 "block_size": 512, 00:03:16.870 "num_blocks": 16384, 00:03:16.870 "uuid": "a293eaba-ab59-5535-a97a-75103b833065", 00:03:16.870 "assigned_rate_limits": { 00:03:16.870 "rw_ios_per_sec": 0, 00:03:16.870 "rw_mbytes_per_sec": 0, 00:03:16.870 "r_mbytes_per_sec": 0, 00:03:16.870 "w_mbytes_per_sec": 0 00:03:16.870 }, 00:03:16.870 "claimed": false, 00:03:16.870 "zoned": false, 00:03:16.870 "supported_io_types": { 00:03:16.870 "read": true, 00:03:16.870 "write": true, 00:03:16.870 "unmap": true, 00:03:16.870 "flush": true, 00:03:16.870 "reset": true, 00:03:16.870 "nvme_admin": false, 00:03:16.870 "nvme_io": false, 00:03:16.870 "nvme_io_md": false, 00:03:16.870 "write_zeroes": true, 00:03:16.870 "zcopy": true, 00:03:16.870 "get_zone_info": false, 00:03:16.870 "zone_management": false, 00:03:16.870 "zone_append": false, 00:03:16.870 "compare": false, 00:03:16.870 "compare_and_write": false, 00:03:16.870 "abort": true, 00:03:16.870 "seek_hole": false, 00:03:16.870 "seek_data": false, 00:03:16.870 "copy": true, 00:03:16.870 "nvme_iov_md": false 00:03:16.870 }, 00:03:16.870 "memory_domains": [ 00:03:16.870 { 00:03:16.870 "dma_device_id": "system", 00:03:16.870 "dma_device_type": 1 00:03:16.870 }, 00:03:16.870 { 00:03:16.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:16.870 "dma_device_type": 2 00:03:16.870 } 00:03:16.870 ], 00:03:16.870 "driver_specific": { 00:03:16.870 "passthru": { 00:03:16.870 "name": "Passthru0", 00:03:16.870 "base_bdev_name": "Malloc0" 00:03:16.870 } 00:03:16.870 } 00:03:16.870 } 00:03:16.870 ]' 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:16.870 11:05:12 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.870 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:16.870 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:17.129 11:05:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:17.129 00:03:17.129 real 0m0.211s 00:03:17.129 user 0m0.134s 00:03:17.129 sys 0m0.022s 00:03:17.129 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:17.129 11:05:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.129 ************************************ 00:03:17.129 END TEST rpc_integrity 00:03:17.129 ************************************ 00:03:17.129 11:05:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:17.129 11:05:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:17.129 11:05:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:17.129 11:05:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:17.129 ************************************ 00:03:17.129 START TEST rpc_plugins 
00:03:17.129 ************************************ 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:17.130 11:05:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.130 11:05:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:17.130 11:05:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.130 11:05:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:17.130 { 00:03:17.130 "name": "Malloc1", 00:03:17.130 "aliases": [ 00:03:17.130 "7759058b-7f5b-4fc7-8b42-394fac8db887" 00:03:17.130 ], 00:03:17.130 "product_name": "Malloc disk", 00:03:17.130 "block_size": 4096, 00:03:17.130 "num_blocks": 256, 00:03:17.130 "uuid": "7759058b-7f5b-4fc7-8b42-394fac8db887", 00:03:17.130 "assigned_rate_limits": { 00:03:17.130 "rw_ios_per_sec": 0, 00:03:17.130 "rw_mbytes_per_sec": 0, 00:03:17.130 "r_mbytes_per_sec": 0, 00:03:17.130 "w_mbytes_per_sec": 0 00:03:17.130 }, 00:03:17.130 "claimed": false, 00:03:17.130 "zoned": false, 00:03:17.130 "supported_io_types": { 00:03:17.130 "read": true, 00:03:17.130 "write": true, 00:03:17.130 "unmap": true, 00:03:17.130 "flush": true, 00:03:17.130 "reset": true, 00:03:17.130 "nvme_admin": false, 00:03:17.130 "nvme_io": false, 00:03:17.130 "nvme_io_md": false, 00:03:17.130 "write_zeroes": true, 00:03:17.130 "zcopy": true, 00:03:17.130 "get_zone_info": false, 00:03:17.130 "zone_management": false, 00:03:17.130 
"zone_append": false, 00:03:17.130 "compare": false, 00:03:17.130 "compare_and_write": false, 00:03:17.130 "abort": true, 00:03:17.130 "seek_hole": false, 00:03:17.130 "seek_data": false, 00:03:17.130 "copy": true, 00:03:17.130 "nvme_iov_md": false 00:03:17.130 }, 00:03:17.130 "memory_domains": [ 00:03:17.130 { 00:03:17.130 "dma_device_id": "system", 00:03:17.130 "dma_device_type": 1 00:03:17.130 }, 00:03:17.130 { 00:03:17.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:17.130 "dma_device_type": 2 00:03:17.130 } 00:03:17.130 ], 00:03:17.130 "driver_specific": {} 00:03:17.130 } 00:03:17.130 ]' 00:03:17.130 11:05:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:17.130 11:05:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:17.130 11:05:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.130 11:05:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.130 11:05:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:17.130 11:05:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:17.130 11:05:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:17.130 00:03:17.130 real 0m0.107s 00:03:17.130 user 0m0.068s 00:03:17.130 sys 0m0.007s 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:17.130 11:05:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:17.130 ************************************ 
00:03:17.130 END TEST rpc_plugins 00:03:17.130 ************************************ 00:03:17.130 11:05:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:17.130 11:05:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:17.130 11:05:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:17.130 11:05:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:17.130 ************************************ 00:03:17.130 START TEST rpc_trace_cmd_test 00:03:17.130 ************************************ 00:03:17.130 11:05:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:17.130 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:17.130 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:17.130 11:05:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.130 11:05:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:17.130 11:05:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.130 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:17.130 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2466973", 00:03:17.130 "tpoint_group_mask": "0x8", 00:03:17.130 "iscsi_conn": { 00:03:17.130 "mask": "0x2", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "scsi": { 00:03:17.130 "mask": "0x4", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "bdev": { 00:03:17.130 "mask": "0x8", 00:03:17.130 "tpoint_mask": "0xffffffffffffffff" 00:03:17.130 }, 00:03:17.130 "nvmf_rdma": { 00:03:17.130 "mask": "0x10", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "nvmf_tcp": { 00:03:17.130 "mask": "0x20", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "ftl": { 00:03:17.130 "mask": "0x40", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "blobfs": { 00:03:17.130 "mask": "0x80", 00:03:17.130 
"tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "dsa": { 00:03:17.130 "mask": "0x200", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "thread": { 00:03:17.130 "mask": "0x400", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "nvme_pcie": { 00:03:17.130 "mask": "0x800", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "iaa": { 00:03:17.130 "mask": "0x1000", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "nvme_tcp": { 00:03:17.130 "mask": "0x2000", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "bdev_nvme": { 00:03:17.130 "mask": "0x4000", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "sock": { 00:03:17.130 "mask": "0x8000", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "blob": { 00:03:17.130 "mask": "0x10000", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "bdev_raid": { 00:03:17.130 "mask": "0x20000", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 }, 00:03:17.130 "scheduler": { 00:03:17.130 "mask": "0x40000", 00:03:17.130 "tpoint_mask": "0x0" 00:03:17.130 } 00:03:17.130 }' 00:03:17.130 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:17.388 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:17.388 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:17.388 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:17.388 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:17.388 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:17.388 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:17.388 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:17.388 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:17.388 11:05:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:17.388 00:03:17.388 real 0m0.183s 00:03:17.388 user 0m0.163s 00:03:17.388 sys 0m0.011s 00:03:17.388 11:05:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:17.388 11:05:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:17.388 ************************************ 00:03:17.388 END TEST rpc_trace_cmd_test 00:03:17.388 ************************************ 00:03:17.388 11:05:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:17.388 11:05:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:17.388 11:05:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:17.388 11:05:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:17.388 11:05:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:17.388 11:05:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:17.388 ************************************ 00:03:17.388 START TEST rpc_daemon_integrity 00:03:17.388 ************************************ 00:03:17.388 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:17.388 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:17.388 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.388 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.389 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.389 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:17.389 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:17.389 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:17.389 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:17.389 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.389 11:05:12 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:17.389 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.389 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:17.389 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:17.389 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.389 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:17.647 { 00:03:17.647 "name": "Malloc2", 00:03:17.647 "aliases": [ 00:03:17.647 "4e1ab2dc-d7d0-4cd8-8a6c-4f68fc4ba387" 00:03:17.647 ], 00:03:17.647 "product_name": "Malloc disk", 00:03:17.647 "block_size": 512, 00:03:17.647 "num_blocks": 16384, 00:03:17.647 "uuid": "4e1ab2dc-d7d0-4cd8-8a6c-4f68fc4ba387", 00:03:17.647 "assigned_rate_limits": { 00:03:17.647 "rw_ios_per_sec": 0, 00:03:17.647 "rw_mbytes_per_sec": 0, 00:03:17.647 "r_mbytes_per_sec": 0, 00:03:17.647 "w_mbytes_per_sec": 0 00:03:17.647 }, 00:03:17.647 "claimed": false, 00:03:17.647 "zoned": false, 00:03:17.647 "supported_io_types": { 00:03:17.647 "read": true, 00:03:17.647 "write": true, 00:03:17.647 "unmap": true, 00:03:17.647 "flush": true, 00:03:17.647 "reset": true, 00:03:17.647 "nvme_admin": false, 00:03:17.647 "nvme_io": false, 00:03:17.647 "nvme_io_md": false, 00:03:17.647 "write_zeroes": true, 00:03:17.647 "zcopy": true, 00:03:17.647 "get_zone_info": false, 00:03:17.647 "zone_management": false, 00:03:17.647 "zone_append": false, 00:03:17.647 "compare": false, 00:03:17.647 "compare_and_write": false, 00:03:17.647 "abort": true, 00:03:17.647 "seek_hole": false, 00:03:17.647 "seek_data": false, 00:03:17.647 "copy": true, 00:03:17.647 "nvme_iov_md": false 00:03:17.647 }, 00:03:17.647 "memory_domains": [ 00:03:17.647 { 
00:03:17.647 "dma_device_id": "system", 00:03:17.647 "dma_device_type": 1 00:03:17.647 }, 00:03:17.647 { 00:03:17.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:17.647 "dma_device_type": 2 00:03:17.647 } 00:03:17.647 ], 00:03:17.647 "driver_specific": {} 00:03:17.647 } 00:03:17.647 ]' 00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.647 [2024-11-19 11:05:12.930695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:17.647 [2024-11-19 11:05:12.930750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:17.647 [2024-11-19 11:05:12.930771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe49080 00:03:17.647 [2024-11-19 11:05:12.930799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:17.647 [2024-11-19 11:05:12.932004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:17.647 [2024-11-19 11:05:12.932026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:17.647 Passthru0 00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:17.647 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:17.647 { 00:03:17.647 "name": "Malloc2", 00:03:17.647 "aliases": [ 00:03:17.647 "4e1ab2dc-d7d0-4cd8-8a6c-4f68fc4ba387" 00:03:17.647 ], 00:03:17.647 "product_name": "Malloc disk", 00:03:17.647 "block_size": 512, 00:03:17.647 "num_blocks": 16384, 00:03:17.647 "uuid": "4e1ab2dc-d7d0-4cd8-8a6c-4f68fc4ba387", 00:03:17.647 "assigned_rate_limits": { 00:03:17.647 "rw_ios_per_sec": 0, 00:03:17.647 "rw_mbytes_per_sec": 0, 00:03:17.647 "r_mbytes_per_sec": 0, 00:03:17.647 "w_mbytes_per_sec": 0 00:03:17.647 }, 00:03:17.647 "claimed": true, 00:03:17.647 "claim_type": "exclusive_write", 00:03:17.647 "zoned": false, 00:03:17.647 "supported_io_types": { 00:03:17.647 "read": true, 00:03:17.647 "write": true, 00:03:17.647 "unmap": true, 00:03:17.647 "flush": true, 00:03:17.647 "reset": true, 00:03:17.647 "nvme_admin": false, 00:03:17.647 "nvme_io": false, 00:03:17.647 "nvme_io_md": false, 00:03:17.647 "write_zeroes": true, 00:03:17.647 "zcopy": true, 00:03:17.647 "get_zone_info": false, 00:03:17.647 "zone_management": false, 00:03:17.647 "zone_append": false, 00:03:17.647 "compare": false, 00:03:17.647 "compare_and_write": false, 00:03:17.647 "abort": true, 00:03:17.647 "seek_hole": false, 00:03:17.647 "seek_data": false, 00:03:17.647 "copy": true, 00:03:17.647 "nvme_iov_md": false 00:03:17.647 }, 00:03:17.647 "memory_domains": [ 00:03:17.647 { 00:03:17.647 "dma_device_id": "system", 00:03:17.647 "dma_device_type": 1 00:03:17.647 }, 00:03:17.647 { 00:03:17.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:17.647 "dma_device_type": 2 00:03:17.647 } 00:03:17.647 ], 00:03:17.647 "driver_specific": {} 00:03:17.647 }, 00:03:17.647 { 00:03:17.647 "name": "Passthru0", 00:03:17.647 "aliases": [ 00:03:17.647 "2fc03dbe-fc40-509a-a9c1-979aaa8dba48" 00:03:17.647 ], 00:03:17.647 "product_name": "passthru", 00:03:17.647 "block_size": 512, 00:03:17.647 "num_blocks": 16384, 00:03:17.647 "uuid": 
"2fc03dbe-fc40-509a-a9c1-979aaa8dba48", 00:03:17.647 "assigned_rate_limits": { 00:03:17.647 "rw_ios_per_sec": 0, 00:03:17.647 "rw_mbytes_per_sec": 0, 00:03:17.647 "r_mbytes_per_sec": 0, 00:03:17.647 "w_mbytes_per_sec": 0 00:03:17.647 }, 00:03:17.647 "claimed": false, 00:03:17.648 "zoned": false, 00:03:17.648 "supported_io_types": { 00:03:17.648 "read": true, 00:03:17.648 "write": true, 00:03:17.648 "unmap": true, 00:03:17.648 "flush": true, 00:03:17.648 "reset": true, 00:03:17.648 "nvme_admin": false, 00:03:17.648 "nvme_io": false, 00:03:17.648 "nvme_io_md": false, 00:03:17.648 "write_zeroes": true, 00:03:17.648 "zcopy": true, 00:03:17.648 "get_zone_info": false, 00:03:17.648 "zone_management": false, 00:03:17.648 "zone_append": false, 00:03:17.648 "compare": false, 00:03:17.648 "compare_and_write": false, 00:03:17.648 "abort": true, 00:03:17.648 "seek_hole": false, 00:03:17.648 "seek_data": false, 00:03:17.648 "copy": true, 00:03:17.648 "nvme_iov_md": false 00:03:17.648 }, 00:03:17.648 "memory_domains": [ 00:03:17.648 { 00:03:17.648 "dma_device_id": "system", 00:03:17.648 "dma_device_type": 1 00:03:17.648 }, 00:03:17.648 { 00:03:17.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:17.648 "dma_device_type": 2 00:03:17.648 } 00:03:17.648 ], 00:03:17.648 "driver_specific": { 00:03:17.648 "passthru": { 00:03:17.648 "name": "Passthru0", 00:03:17.648 "base_bdev_name": "Malloc2" 00:03:17.648 } 00:03:17.648 } 00:03:17.648 } 00:03:17.648 ]' 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:17.648 11:05:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.648 11:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:17.648 11:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:17.648 11:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:17.648 11:05:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:17.648 00:03:17.648 real 0m0.229s 00:03:17.648 user 0m0.157s 00:03:17.648 sys 0m0.017s 00:03:17.648 11:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:17.648 11:05:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.648 ************************************ 00:03:17.648 END TEST rpc_daemon_integrity 00:03:17.648 ************************************ 00:03:17.648 11:05:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:17.648 11:05:13 rpc -- rpc/rpc.sh@84 -- # killprocess 2466973 00:03:17.648 11:05:13 rpc -- common/autotest_common.sh@954 -- # '[' -z 2466973 ']' 00:03:17.648 11:05:13 rpc -- common/autotest_common.sh@958 -- # kill -0 2466973 00:03:17.648 11:05:13 rpc -- common/autotest_common.sh@959 -- # uname 00:03:17.648 11:05:13 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:17.648 11:05:13 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2466973 00:03:17.648 11:05:13 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:17.648 11:05:13 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:17.648 11:05:13 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2466973' 00:03:17.648 killing process with pid 2466973 00:03:17.648 11:05:13 rpc -- common/autotest_common.sh@973 -- # kill 2466973 00:03:17.648 11:05:13 rpc -- common/autotest_common.sh@978 -- # wait 2466973 00:03:18.214 00:03:18.214 real 0m1.965s 00:03:18.214 user 0m2.440s 00:03:18.214 sys 0m0.606s 00:03:18.214 11:05:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:18.214 11:05:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:18.214 ************************************ 00:03:18.214 END TEST rpc 00:03:18.214 ************************************ 00:03:18.214 11:05:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:18.214 11:05:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:18.214 11:05:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:18.214 11:05:13 -- common/autotest_common.sh@10 -- # set +x 00:03:18.214 ************************************ 00:03:18.214 START TEST skip_rpc 00:03:18.214 ************************************ 00:03:18.214 11:05:13 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:18.214 * Looking for test storage... 
00:03:18.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:18.214 11:05:13 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:18.214 11:05:13 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:18.214 11:05:13 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:18.214 11:05:13 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:18.214 11:05:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:18.472 11:05:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:18.473 11:05:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:18.473 11:05:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:18.473 11:05:13 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:18.473 11:05:13 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:18.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.473 --rc genhtml_branch_coverage=1 00:03:18.473 --rc genhtml_function_coverage=1 00:03:18.473 --rc genhtml_legend=1 00:03:18.473 --rc geninfo_all_blocks=1 00:03:18.473 --rc geninfo_unexecuted_blocks=1 00:03:18.473 00:03:18.473 ' 00:03:18.473 11:05:13 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:18.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.473 --rc genhtml_branch_coverage=1 00:03:18.473 --rc genhtml_function_coverage=1 00:03:18.473 --rc genhtml_legend=1 00:03:18.473 --rc geninfo_all_blocks=1 00:03:18.473 --rc geninfo_unexecuted_blocks=1 00:03:18.473 00:03:18.473 ' 00:03:18.473 11:05:13 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:18.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.473 --rc genhtml_branch_coverage=1 00:03:18.473 --rc genhtml_function_coverage=1 00:03:18.473 --rc genhtml_legend=1 00:03:18.473 --rc geninfo_all_blocks=1 00:03:18.473 --rc geninfo_unexecuted_blocks=1 00:03:18.473 00:03:18.473 ' 00:03:18.473 11:05:13 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:18.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.473 --rc genhtml_branch_coverage=1 00:03:18.473 --rc genhtml_function_coverage=1 00:03:18.473 --rc genhtml_legend=1 00:03:18.473 --rc geninfo_all_blocks=1 00:03:18.473 --rc geninfo_unexecuted_blocks=1 00:03:18.473 00:03:18.473 ' 00:03:18.473 11:05:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:18.473 11:05:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:18.473 11:05:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:18.473 11:05:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:18.473 11:05:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:18.473 11:05:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:18.473 ************************************ 00:03:18.473 START TEST skip_rpc 00:03:18.473 ************************************ 00:03:18.473 11:05:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:18.473 11:05:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2467415 00:03:18.473 11:05:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:18.473 11:05:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:18.473 11:05:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:18.473 [2024-11-19 11:05:13.804857] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:03:18.473 [2024-11-19 11:05:13.804935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467415 ] 00:03:18.473 [2024-11-19 11:05:13.876045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:18.473 [2024-11-19 11:05:13.931978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:23.738 11:05:18 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2467415 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2467415 ']' 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2467415 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2467415 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2467415' 00:03:23.738 killing process with pid 2467415 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2467415 00:03:23.738 11:05:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2467415 00:03:23.738 00:03:23.738 real 0m5.457s 00:03:23.738 user 0m5.172s 00:03:23.738 sys 0m0.302s 00:03:23.738 11:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:23.738 11:05:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.738 ************************************ 00:03:23.738 END TEST skip_rpc 00:03:23.738 ************************************ 00:03:23.738 11:05:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:23.738 11:05:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:23.738 11:05:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:23.738 11:05:19 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.996 ************************************ 00:03:23.996 START TEST skip_rpc_with_json 00:03:23.996 ************************************ 00:03:23.996 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:23.996 11:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:23.996 11:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2468107 00:03:23.996 11:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:23.996 11:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:23.996 11:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2468107 00:03:23.996 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2468107 ']' 00:03:23.996 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:23.996 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:23.996 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:23.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:23.996 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:23.996 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:23.996 [2024-11-19 11:05:19.312907] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:03:23.996 [2024-11-19 11:05:19.313011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468107 ] 00:03:23.997 [2024-11-19 11:05:19.387602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:23.997 [2024-11-19 11:05:19.446834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:24.255 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:24.255 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:24.255 11:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:24.255 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.255 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:24.255 [2024-11-19 11:05:19.716701] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:24.255 request: 00:03:24.255 { 00:03:24.255 "trtype": "tcp", 00:03:24.255 "method": "nvmf_get_transports", 00:03:24.255 "req_id": 1 00:03:24.255 } 00:03:24.255 Got JSON-RPC error response 00:03:24.255 response: 00:03:24.255 { 00:03:24.255 "code": -19, 00:03:24.255 "message": "No such device" 00:03:24.255 } 00:03:24.255 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:24.255 11:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:24.255 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.255 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:24.255 [2024-11-19 11:05:19.724817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:24.255 11:05:19 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.255 11:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:24.255 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:24.255 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:24.514 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:24.514 11:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:24.514 { 00:03:24.514 "subsystems": [ 00:03:24.514 { 00:03:24.514 "subsystem": "fsdev", 00:03:24.514 "config": [ 00:03:24.514 { 00:03:24.514 "method": "fsdev_set_opts", 00:03:24.514 "params": { 00:03:24.514 "fsdev_io_pool_size": 65535, 00:03:24.514 "fsdev_io_cache_size": 256 00:03:24.514 } 00:03:24.514 } 00:03:24.514 ] 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "vfio_user_target", 00:03:24.514 "config": null 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "keyring", 00:03:24.514 "config": [] 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "iobuf", 00:03:24.514 "config": [ 00:03:24.514 { 00:03:24.514 "method": "iobuf_set_options", 00:03:24.514 "params": { 00:03:24.514 "small_pool_count": 8192, 00:03:24.514 "large_pool_count": 1024, 00:03:24.514 "small_bufsize": 8192, 00:03:24.514 "large_bufsize": 135168, 00:03:24.514 "enable_numa": false 00:03:24.514 } 00:03:24.514 } 00:03:24.514 ] 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "sock", 00:03:24.514 "config": [ 00:03:24.514 { 00:03:24.514 "method": "sock_set_default_impl", 00:03:24.514 "params": { 00:03:24.514 "impl_name": "posix" 00:03:24.514 } 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "method": "sock_impl_set_options", 00:03:24.514 "params": { 00:03:24.514 "impl_name": "ssl", 00:03:24.514 "recv_buf_size": 4096, 00:03:24.514 "send_buf_size": 4096, 
00:03:24.514 "enable_recv_pipe": true, 00:03:24.514 "enable_quickack": false, 00:03:24.514 "enable_placement_id": 0, 00:03:24.514 "enable_zerocopy_send_server": true, 00:03:24.514 "enable_zerocopy_send_client": false, 00:03:24.514 "zerocopy_threshold": 0, 00:03:24.514 "tls_version": 0, 00:03:24.514 "enable_ktls": false 00:03:24.514 } 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "method": "sock_impl_set_options", 00:03:24.514 "params": { 00:03:24.514 "impl_name": "posix", 00:03:24.514 "recv_buf_size": 2097152, 00:03:24.514 "send_buf_size": 2097152, 00:03:24.514 "enable_recv_pipe": true, 00:03:24.514 "enable_quickack": false, 00:03:24.514 "enable_placement_id": 0, 00:03:24.514 "enable_zerocopy_send_server": true, 00:03:24.514 "enable_zerocopy_send_client": false, 00:03:24.514 "zerocopy_threshold": 0, 00:03:24.514 "tls_version": 0, 00:03:24.514 "enable_ktls": false 00:03:24.514 } 00:03:24.514 } 00:03:24.514 ] 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "vmd", 00:03:24.514 "config": [] 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "accel", 00:03:24.514 "config": [ 00:03:24.514 { 00:03:24.514 "method": "accel_set_options", 00:03:24.514 "params": { 00:03:24.514 "small_cache_size": 128, 00:03:24.514 "large_cache_size": 16, 00:03:24.514 "task_count": 2048, 00:03:24.514 "sequence_count": 2048, 00:03:24.514 "buf_count": 2048 00:03:24.514 } 00:03:24.514 } 00:03:24.514 ] 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "bdev", 00:03:24.514 "config": [ 00:03:24.514 { 00:03:24.514 "method": "bdev_set_options", 00:03:24.514 "params": { 00:03:24.514 "bdev_io_pool_size": 65535, 00:03:24.514 "bdev_io_cache_size": 256, 00:03:24.514 "bdev_auto_examine": true, 00:03:24.514 "iobuf_small_cache_size": 128, 00:03:24.514 "iobuf_large_cache_size": 16 00:03:24.514 } 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "method": "bdev_raid_set_options", 00:03:24.514 "params": { 00:03:24.514 "process_window_size_kb": 1024, 00:03:24.514 "process_max_bandwidth_mb_sec": 0 
00:03:24.514 } 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "method": "bdev_iscsi_set_options", 00:03:24.514 "params": { 00:03:24.514 "timeout_sec": 30 00:03:24.514 } 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "method": "bdev_nvme_set_options", 00:03:24.514 "params": { 00:03:24.514 "action_on_timeout": "none", 00:03:24.514 "timeout_us": 0, 00:03:24.514 "timeout_admin_us": 0, 00:03:24.514 "keep_alive_timeout_ms": 10000, 00:03:24.514 "arbitration_burst": 0, 00:03:24.514 "low_priority_weight": 0, 00:03:24.514 "medium_priority_weight": 0, 00:03:24.514 "high_priority_weight": 0, 00:03:24.514 "nvme_adminq_poll_period_us": 10000, 00:03:24.514 "nvme_ioq_poll_period_us": 0, 00:03:24.514 "io_queue_requests": 0, 00:03:24.514 "delay_cmd_submit": true, 00:03:24.514 "transport_retry_count": 4, 00:03:24.514 "bdev_retry_count": 3, 00:03:24.514 "transport_ack_timeout": 0, 00:03:24.514 "ctrlr_loss_timeout_sec": 0, 00:03:24.514 "reconnect_delay_sec": 0, 00:03:24.514 "fast_io_fail_timeout_sec": 0, 00:03:24.514 "disable_auto_failback": false, 00:03:24.514 "generate_uuids": false, 00:03:24.514 "transport_tos": 0, 00:03:24.514 "nvme_error_stat": false, 00:03:24.514 "rdma_srq_size": 0, 00:03:24.514 "io_path_stat": false, 00:03:24.514 "allow_accel_sequence": false, 00:03:24.514 "rdma_max_cq_size": 0, 00:03:24.514 "rdma_cm_event_timeout_ms": 0, 00:03:24.514 "dhchap_digests": [ 00:03:24.514 "sha256", 00:03:24.514 "sha384", 00:03:24.514 "sha512" 00:03:24.514 ], 00:03:24.514 "dhchap_dhgroups": [ 00:03:24.514 "null", 00:03:24.514 "ffdhe2048", 00:03:24.514 "ffdhe3072", 00:03:24.514 "ffdhe4096", 00:03:24.514 "ffdhe6144", 00:03:24.514 "ffdhe8192" 00:03:24.514 ] 00:03:24.514 } 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "method": "bdev_nvme_set_hotplug", 00:03:24.514 "params": { 00:03:24.514 "period_us": 100000, 00:03:24.514 "enable": false 00:03:24.514 } 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "method": "bdev_wait_for_examine" 00:03:24.514 } 00:03:24.514 ] 00:03:24.514 }, 00:03:24.514 { 
00:03:24.514 "subsystem": "scsi", 00:03:24.514 "config": null 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "scheduler", 00:03:24.514 "config": [ 00:03:24.514 { 00:03:24.514 "method": "framework_set_scheduler", 00:03:24.514 "params": { 00:03:24.514 "name": "static" 00:03:24.514 } 00:03:24.514 } 00:03:24.514 ] 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "vhost_scsi", 00:03:24.514 "config": [] 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "vhost_blk", 00:03:24.514 "config": [] 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "ublk", 00:03:24.514 "config": [] 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "nbd", 00:03:24.514 "config": [] 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "subsystem": "nvmf", 00:03:24.514 "config": [ 00:03:24.514 { 00:03:24.514 "method": "nvmf_set_config", 00:03:24.514 "params": { 00:03:24.514 "discovery_filter": "match_any", 00:03:24.514 "admin_cmd_passthru": { 00:03:24.514 "identify_ctrlr": false 00:03:24.514 }, 00:03:24.514 "dhchap_digests": [ 00:03:24.514 "sha256", 00:03:24.514 "sha384", 00:03:24.514 "sha512" 00:03:24.514 ], 00:03:24.514 "dhchap_dhgroups": [ 00:03:24.514 "null", 00:03:24.514 "ffdhe2048", 00:03:24.514 "ffdhe3072", 00:03:24.514 "ffdhe4096", 00:03:24.514 "ffdhe6144", 00:03:24.514 "ffdhe8192" 00:03:24.514 ] 00:03:24.514 } 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "method": "nvmf_set_max_subsystems", 00:03:24.514 "params": { 00:03:24.514 "max_subsystems": 1024 00:03:24.514 } 00:03:24.514 }, 00:03:24.514 { 00:03:24.514 "method": "nvmf_set_crdt", 00:03:24.514 "params": { 00:03:24.514 "crdt1": 0, 00:03:24.514 "crdt2": 0, 00:03:24.514 "crdt3": 0 00:03:24.514 } 00:03:24.514 }, 00:03:24.515 { 00:03:24.515 "method": "nvmf_create_transport", 00:03:24.515 "params": { 00:03:24.515 "trtype": "TCP", 00:03:24.515 "max_queue_depth": 128, 00:03:24.515 "max_io_qpairs_per_ctrlr": 127, 00:03:24.515 "in_capsule_data_size": 4096, 00:03:24.515 "max_io_size": 131072, 00:03:24.515 
"io_unit_size": 131072, 00:03:24.515 "max_aq_depth": 128, 00:03:24.515 "num_shared_buffers": 511, 00:03:24.515 "buf_cache_size": 4294967295, 00:03:24.515 "dif_insert_or_strip": false, 00:03:24.515 "zcopy": false, 00:03:24.515 "c2h_success": true, 00:03:24.515 "sock_priority": 0, 00:03:24.515 "abort_timeout_sec": 1, 00:03:24.515 "ack_timeout": 0, 00:03:24.515 "data_wr_pool_size": 0 00:03:24.515 } 00:03:24.515 } 00:03:24.515 ] 00:03:24.515 }, 00:03:24.515 { 00:03:24.515 "subsystem": "iscsi", 00:03:24.515 "config": [ 00:03:24.515 { 00:03:24.515 "method": "iscsi_set_options", 00:03:24.515 "params": { 00:03:24.515 "node_base": "iqn.2016-06.io.spdk", 00:03:24.515 "max_sessions": 128, 00:03:24.515 "max_connections_per_session": 2, 00:03:24.515 "max_queue_depth": 64, 00:03:24.515 "default_time2wait": 2, 00:03:24.515 "default_time2retain": 20, 00:03:24.515 "first_burst_length": 8192, 00:03:24.515 "immediate_data": true, 00:03:24.515 "allow_duplicated_isid": false, 00:03:24.515 "error_recovery_level": 0, 00:03:24.515 "nop_timeout": 60, 00:03:24.515 "nop_in_interval": 30, 00:03:24.515 "disable_chap": false, 00:03:24.515 "require_chap": false, 00:03:24.515 "mutual_chap": false, 00:03:24.515 "chap_group": 0, 00:03:24.515 "max_large_datain_per_connection": 64, 00:03:24.515 "max_r2t_per_connection": 4, 00:03:24.515 "pdu_pool_size": 36864, 00:03:24.515 "immediate_data_pool_size": 16384, 00:03:24.515 "data_out_pool_size": 2048 00:03:24.515 } 00:03:24.515 } 00:03:24.515 ] 00:03:24.515 } 00:03:24.515 ] 00:03:24.515 } 00:03:24.515 11:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:24.515 11:05:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2468107 00:03:24.515 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2468107 ']' 00:03:24.515 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2468107 00:03:24.515 11:05:19 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:24.515 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:24.515 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2468107 00:03:24.515 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:24.515 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:24.515 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2468107' 00:03:24.515 killing process with pid 2468107 00:03:24.515 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2468107 00:03:24.515 11:05:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2468107 00:03:25.081 11:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2468253 00:03:25.081 11:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:25.081 11:05:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2468253 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2468253 ']' 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2468253 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2468253 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2468253' 00:03:30.361 killing process with pid 2468253 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2468253 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2468253 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:30.361 00:03:30.361 real 0m6.531s 00:03:30.361 user 0m6.161s 00:03:30.361 sys 0m0.683s 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:30.361 ************************************ 00:03:30.361 END TEST skip_rpc_with_json 00:03:30.361 ************************************ 00:03:30.361 11:05:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:30.361 11:05:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:30.361 11:05:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:30.361 11:05:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.361 ************************************ 00:03:30.361 START TEST skip_rpc_with_delay 00:03:30.361 ************************************ 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:30.361 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:30.619 [2024-11-19 11:05:25.892696] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:30.619 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:30.619 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:30.619 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:30.619 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:30.619 00:03:30.619 real 0m0.072s 00:03:30.619 user 0m0.049s 00:03:30.619 sys 0m0.023s 00:03:30.619 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.619 11:05:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:30.619 ************************************ 00:03:30.619 END TEST skip_rpc_with_delay 00:03:30.619 ************************************ 00:03:30.619 11:05:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:30.619 11:05:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:30.619 11:05:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:30.619 11:05:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:30.619 11:05:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:30.619 11:05:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.619 ************************************ 00:03:30.619 START TEST exit_on_failed_rpc_init 00:03:30.619 ************************************ 00:03:30.619 11:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:30.619 11:05:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2468965 00:03:30.619 11:05:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:30.619 11:05:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2468965 
00:03:30.619 11:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2468965 ']' 00:03:30.619 11:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:30.619 11:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:30.619 11:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:30.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:30.619 11:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:30.619 11:05:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:30.619 [2024-11-19 11:05:26.015549] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:03:30.619 [2024-11-19 11:05:26.015636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468965 ] 00:03:30.619 [2024-11-19 11:05:26.093226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.878 [2024-11-19 11:05:26.154148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:31.136 
11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:31.136 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:31.136 [2024-11-19 11:05:26.472166] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:03:31.136 [2024-11-19 11:05:26.472260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468976 ] 00:03:31.136 [2024-11-19 11:05:26.545573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.136 [2024-11-19 11:05:26.603658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:31.136 [2024-11-19 11:05:26.603786] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:31.136 [2024-11-19 11:05:26.603805] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:31.136 [2024-11-19 11:05:26.603817] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2468965 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2468965 ']' 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2468965 00:03:31.395 11:05:26 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2468965 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2468965' 00:03:31.395 killing process with pid 2468965 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2468965 00:03:31.395 11:05:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2468965 00:03:31.654 00:03:31.654 real 0m1.162s 00:03:31.654 user 0m1.274s 00:03:31.654 sys 0m0.439s 00:03:31.654 11:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.654 11:05:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:31.654 ************************************ 00:03:31.654 END TEST exit_on_failed_rpc_init 00:03:31.654 ************************************ 00:03:31.654 11:05:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:31.654 00:03:31.654 real 0m13.571s 00:03:31.654 user 0m12.835s 00:03:31.654 sys 0m1.636s 00:03:31.654 11:05:27 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.654 11:05:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.654 ************************************ 00:03:31.654 END TEST skip_rpc 00:03:31.654 ************************************ 00:03:31.913 11:05:27 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:31.913 11:05:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.913 11:05:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.913 11:05:27 -- common/autotest_common.sh@10 -- # set +x 00:03:31.913 ************************************ 00:03:31.913 START TEST rpc_client 00:03:31.913 ************************************ 00:03:31.913 11:05:27 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:31.913 * Looking for test storage... 00:03:31.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:31.913 11:05:27 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:31.913 11:05:27 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:31.913 11:05:27 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:31.913 11:05:27 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:31.913 11:05:27 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.913 11:05:27 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.913 11:05:27 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.913 11:05:27 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.914 11:05:27 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:31.914 11:05:27 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.914 11:05:27 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:31.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.914 --rc genhtml_branch_coverage=1 00:03:31.914 --rc genhtml_function_coverage=1 00:03:31.914 --rc genhtml_legend=1 00:03:31.914 --rc geninfo_all_blocks=1 00:03:31.914 --rc geninfo_unexecuted_blocks=1 00:03:31.914 00:03:31.914 ' 00:03:31.914 11:05:27 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:31.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.914 --rc genhtml_branch_coverage=1 
00:03:31.914 --rc genhtml_function_coverage=1 00:03:31.914 --rc genhtml_legend=1 00:03:31.914 --rc geninfo_all_blocks=1 00:03:31.914 --rc geninfo_unexecuted_blocks=1 00:03:31.914 00:03:31.914 ' 00:03:31.914 11:05:27 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:31.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.914 --rc genhtml_branch_coverage=1 00:03:31.914 --rc genhtml_function_coverage=1 00:03:31.914 --rc genhtml_legend=1 00:03:31.914 --rc geninfo_all_blocks=1 00:03:31.914 --rc geninfo_unexecuted_blocks=1 00:03:31.914 00:03:31.914 ' 00:03:31.914 11:05:27 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:31.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.914 --rc genhtml_branch_coverage=1 00:03:31.914 --rc genhtml_function_coverage=1 00:03:31.914 --rc genhtml_legend=1 00:03:31.914 --rc geninfo_all_blocks=1 00:03:31.914 --rc geninfo_unexecuted_blocks=1 00:03:31.914 00:03:31.914 ' 00:03:31.914 11:05:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:31.914 OK 00:03:31.914 11:05:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:31.914 00:03:31.914 real 0m0.157s 00:03:31.914 user 0m0.109s 00:03:31.914 sys 0m0.058s 00:03:31.914 11:05:27 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.914 11:05:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:31.914 ************************************ 00:03:31.914 END TEST rpc_client 00:03:31.914 ************************************ 00:03:31.914 11:05:27 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:31.914 11:05:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.914 11:05:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.914 11:05:27 -- common/autotest_common.sh@10 
-- # set +x 00:03:31.914 ************************************ 00:03:31.914 START TEST json_config 00:03:31.914 ************************************ 00:03:31.914 11:05:27 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:32.173 11:05:27 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:32.173 11:05:27 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:32.173 11:05:27 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:32.173 11:05:27 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:32.173 11:05:27 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:32.173 11:05:27 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:32.173 11:05:27 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:32.173 11:05:27 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:32.173 11:05:27 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:32.173 11:05:27 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:32.173 11:05:27 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:32.173 11:05:27 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:32.173 11:05:27 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:32.173 11:05:27 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:32.173 11:05:27 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:32.173 11:05:27 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:32.173 11:05:27 json_config -- scripts/common.sh@345 -- # : 1 00:03:32.174 11:05:27 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:32.174 11:05:27 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:32.174 11:05:27 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:32.174 11:05:27 json_config -- scripts/common.sh@353 -- # local d=1 00:03:32.174 11:05:27 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:32.174 11:05:27 json_config -- scripts/common.sh@355 -- # echo 1 00:03:32.174 11:05:27 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:32.174 11:05:27 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:32.174 11:05:27 json_config -- scripts/common.sh@353 -- # local d=2 00:03:32.174 11:05:27 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:32.174 11:05:27 json_config -- scripts/common.sh@355 -- # echo 2 00:03:32.174 11:05:27 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:32.174 11:05:27 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:32.174 11:05:27 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:32.174 11:05:27 json_config -- scripts/common.sh@368 -- # return 0 00:03:32.174 11:05:27 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:32.174 11:05:27 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:32.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.174 --rc genhtml_branch_coverage=1 00:03:32.174 --rc genhtml_function_coverage=1 00:03:32.174 --rc genhtml_legend=1 00:03:32.174 --rc geninfo_all_blocks=1 00:03:32.174 --rc geninfo_unexecuted_blocks=1 00:03:32.174 00:03:32.174 ' 00:03:32.174 11:05:27 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:32.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.174 --rc genhtml_branch_coverage=1 00:03:32.174 --rc genhtml_function_coverage=1 00:03:32.174 --rc genhtml_legend=1 00:03:32.174 --rc geninfo_all_blocks=1 00:03:32.174 --rc geninfo_unexecuted_blocks=1 00:03:32.174 00:03:32.174 ' 00:03:32.174 11:05:27 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:32.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.174 --rc genhtml_branch_coverage=1 00:03:32.174 --rc genhtml_function_coverage=1 00:03:32.174 --rc genhtml_legend=1 00:03:32.174 --rc geninfo_all_blocks=1 00:03:32.174 --rc geninfo_unexecuted_blocks=1 00:03:32.174 00:03:32.174 ' 00:03:32.174 11:05:27 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:32.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.174 --rc genhtml_branch_coverage=1 00:03:32.174 --rc genhtml_function_coverage=1 00:03:32.174 --rc genhtml_legend=1 00:03:32.174 --rc geninfo_all_blocks=1 00:03:32.174 --rc geninfo_unexecuted_blocks=1 00:03:32.174 00:03:32.174 ' 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:32.174 11:05:27 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:32.174 11:05:27 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:32.174 11:05:27 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:32.174 11:05:27 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:32.174 11:05:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.174 11:05:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.174 11:05:27 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.174 11:05:27 json_config -- paths/export.sh@5 -- # export PATH 00:03:32.174 11:05:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@51 -- # : 0 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:32.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:32.174 11:05:27 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:32.174 INFO: JSON configuration test init 00:03:32.174 11:05:27 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:32.174 11:05:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:32.174 11:05:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:32.174 11:05:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:32.174 11:05:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:32.174 11:05:27 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:32.174 11:05:27 json_config -- json_config/common.sh@9 -- # local app=target 00:03:32.174 11:05:27 json_config -- json_config/common.sh@10 -- # shift 00:03:32.174 11:05:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:32.174 11:05:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:32.174 11:05:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:32.174 11:05:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:32.174 11:05:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:32.174 11:05:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2469236 00:03:32.175 11:05:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:32.175 11:05:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:32.175 Waiting for target to run... 
00:03:32.175 11:05:27 json_config -- json_config/common.sh@25 -- # waitforlisten 2469236 /var/tmp/spdk_tgt.sock 00:03:32.175 11:05:27 json_config -- common/autotest_common.sh@835 -- # '[' -z 2469236 ']' 00:03:32.175 11:05:27 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:32.175 11:05:27 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:32.175 11:05:27 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:32.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:32.175 11:05:27 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:32.175 11:05:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:32.175 [2024-11-19 11:05:27.605580] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:03:32.175 [2024-11-19 11:05:27.605675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469236 ] 00:03:32.741 [2024-11-19 11:05:27.962962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.741 [2024-11-19 11:05:28.005085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.308 11:05:28 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:33.308 11:05:28 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:33.308 11:05:28 json_config -- json_config/common.sh@26 -- # echo '' 00:03:33.308 00:03:33.308 11:05:28 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:33.308 11:05:28 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:33.308 11:05:28 json_config -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:03:33.308 11:05:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:33.308 11:05:28 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:33.308 11:05:28 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:33.308 11:05:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:33.308 11:05:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:33.308 11:05:28 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:33.308 11:05:28 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:33.308 11:05:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:36.592 11:05:31 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:36.592 11:05:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:36.592 11:05:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:36.592 11:05:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:36.592 11:05:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:36.592 11:05:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:36.592 11:05:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:36.592 11:05:31 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:36.592 11:05:31 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:36.592 11:05:31 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:36.592 11:05:31 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:36.592 
11:05:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:36.592 11:05:32 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:36.592 11:05:32 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:36.592 11:05:32 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:36.592 11:05:32 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:36.592 11:05:32 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:36.592 11:05:32 json_config -- json_config/json_config.sh@54 -- # sort 00:03:36.592 11:05:32 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:36.592 11:05:32 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:36.592 11:05:32 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:36.592 11:05:32 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:36.592 11:05:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:36.592 11:05:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:36.850 11:05:32 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:36.850 11:05:32 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:36.850 11:05:32 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:36.850 11:05:32 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:36.850 11:05:32 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:36.850 11:05:32 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:36.850 11:05:32 json_config -- json_config/json_config.sh@237 -- # timing_enter 
create_nvmf_subsystem_config 00:03:36.850 11:05:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:36.850 11:05:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:36.850 11:05:32 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:36.850 11:05:32 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:36.850 11:05:32 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:36.850 11:05:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:36.850 11:05:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:37.108 MallocForNvmf0 00:03:37.108 11:05:32 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:37.108 11:05:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:37.366 MallocForNvmf1 00:03:37.366 11:05:32 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:37.366 11:05:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:37.623 [2024-11-19 11:05:32.891545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:37.623 11:05:32 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:37.623 11:05:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:37.881 11:05:33 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:37.881 11:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:38.197 11:05:33 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:38.197 11:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:38.476 11:05:33 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:38.476 11:05:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:38.476 [2024-11-19 11:05:33.966922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:38.733 11:05:33 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:38.733 11:05:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:38.733 11:05:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.733 11:05:34 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:38.733 11:05:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:38.733 11:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.733 11:05:34 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:38.733 11:05:34 
json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:38.733 11:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:38.992 MallocBdevForConfigChangeCheck 00:03:38.992 11:05:34 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:38.992 11:05:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:38.992 11:05:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.992 11:05:34 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:38.992 11:05:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:39.251 11:05:34 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:39.251 INFO: shutting down applications... 
00:03:39.251 11:05:34 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:39.251 11:05:34 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:39.251 11:05:34 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:39.251 11:05:34 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:41.778 Calling clear_iscsi_subsystem 00:03:41.778 Calling clear_nvmf_subsystem 00:03:41.778 Calling clear_nbd_subsystem 00:03:41.778 Calling clear_ublk_subsystem 00:03:41.778 Calling clear_vhost_blk_subsystem 00:03:41.779 Calling clear_vhost_scsi_subsystem 00:03:41.779 Calling clear_bdev_subsystem 00:03:42.036 11:05:37 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:42.036 11:05:37 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:42.036 11:05:37 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:42.037 11:05:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:42.037 11:05:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:42.037 11:05:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:42.295 11:05:37 json_config -- json_config/json_config.sh@352 -- # break 00:03:42.295 11:05:37 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:42.295 11:05:37 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:03:42.295 11:05:37 json_config -- 
json_config/common.sh@31 -- # local app=target 00:03:42.295 11:05:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:42.295 11:05:37 json_config -- json_config/common.sh@35 -- # [[ -n 2469236 ]] 00:03:42.295 11:05:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2469236 00:03:42.295 11:05:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:42.295 11:05:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:42.295 11:05:37 json_config -- json_config/common.sh@41 -- # kill -0 2469236 00:03:42.295 11:05:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:42.864 11:05:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:42.864 11:05:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:42.864 11:05:38 json_config -- json_config/common.sh@41 -- # kill -0 2469236 00:03:42.864 11:05:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:42.864 11:05:38 json_config -- json_config/common.sh@43 -- # break 00:03:42.864 11:05:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:42.864 11:05:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:42.864 SPDK target shutdown done 00:03:42.864 11:05:38 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:42.864 INFO: relaunching applications... 
00:03:42.864 11:05:38 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:42.864 11:05:38 json_config -- json_config/common.sh@9 -- # local app=target 00:03:42.864 11:05:38 json_config -- json_config/common.sh@10 -- # shift 00:03:42.864 11:05:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:42.864 11:05:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:42.864 11:05:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:42.864 11:05:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:42.864 11:05:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:42.864 11:05:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2470686 00:03:42.864 11:05:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:42.864 11:05:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:42.864 Waiting for target to run... 00:03:42.864 11:05:38 json_config -- json_config/common.sh@25 -- # waitforlisten 2470686 /var/tmp/spdk_tgt.sock 00:03:42.864 11:05:38 json_config -- common/autotest_common.sh@835 -- # '[' -z 2470686 ']' 00:03:42.864 11:05:38 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:42.864 11:05:38 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:42.864 11:05:38 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:42.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:42.864 11:05:38 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:42.864 11:05:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.864 [2024-11-19 11:05:38.257338] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:03:42.864 [2024-11-19 11:05:38.257442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470686 ] 00:03:43.434 [2024-11-19 11:05:38.802477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.434 [2024-11-19 11:05:38.853653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.717 [2024-11-19 11:05:41.907569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:46.717 [2024-11-19 11:05:41.940016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:46.717 11:05:41 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:46.717 11:05:41 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:46.717 11:05:41 json_config -- json_config/common.sh@26 -- # echo '' 00:03:46.717 00:03:46.717 11:05:41 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:46.717 11:05:41 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:46.717 INFO: Checking if target configuration is the same... 
00:03:46.717 11:05:41 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:46.717 11:05:41 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:46.717 11:05:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:46.717 + '[' 2 -ne 2 ']' 00:03:46.717 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:46.717 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:46.717 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:46.717 +++ basename /dev/fd/62 00:03:46.717 ++ mktemp /tmp/62.XXX 00:03:46.717 + tmp_file_1=/tmp/62.T7M 00:03:46.717 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:46.717 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:46.717 + tmp_file_2=/tmp/spdk_tgt_config.json.e8K 00:03:46.717 + ret=0 00:03:46.717 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:46.975 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:46.975 + diff -u /tmp/62.T7M /tmp/spdk_tgt_config.json.e8K 00:03:46.975 + echo 'INFO: JSON config files are the same' 00:03:46.975 INFO: JSON config files are the same 00:03:46.975 + rm /tmp/62.T7M /tmp/spdk_tgt_config.json.e8K 00:03:46.975 + exit 0 00:03:46.975 11:05:42 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:46.975 11:05:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:46.975 INFO: changing configuration and checking if this can be detected... 
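The `json_diff.sh` trace above reduces to three steps: dump the live configuration over RPC (`save_config`), normalize both JSON documents so key ordering cannot cause spurious differences, and `diff` the normalized copies. A self-contained sketch of that idea — `json_same` is a hypothetical name, and plain `python3` key-sorting is substituted here for SPDK's `config_filter.py -method sort` so the sketch runs anywhere:

```shell
# Compare two JSON files for semantic equality: re-serialize each with sorted
# keys and fixed indentation into temp files, then diff. Returns 0 if equal.
json_same() {
    local a b ret
    a=$(mktemp) b=$(mktemp)
    python3 -c 'import json,sys; json.dump(json.load(open(sys.argv[1])), sys.stdout, sort_keys=True, indent=2)' "$1" > "$a"
    python3 -c 'import json,sys; json.dump(json.load(open(sys.argv[1])), sys.stdout, sort_keys=True, indent=2)' "$2" > "$b"
    diff -u "$a" "$b" >/dev/null
    ret=$?
    rm -f "$a" "$b"
    return $ret
}
```

Normalizing before diffing is the whole point: the target may emit subsystems or object keys in a different order than the file it was started from, and a byte-wise `diff` of the raw documents would report noise rather than real configuration drift.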
00:03:46.976 11:05:42 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:46.976 11:05:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:47.234 11:05:42 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.234 11:05:42 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:47.234 11:05:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:47.234 + '[' 2 -ne 2 ']' 00:03:47.234 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:47.234 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:47.234 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:47.234 +++ basename /dev/fd/62 00:03:47.234 ++ mktemp /tmp/62.XXX 00:03:47.234 + tmp_file_1=/tmp/62.5VP 00:03:47.234 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.234 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:47.234 + tmp_file_2=/tmp/spdk_tgt_config.json.Xck 00:03:47.234 + ret=0 00:03:47.234 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:47.800 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:47.800 + diff -u /tmp/62.5VP /tmp/spdk_tgt_config.json.Xck 00:03:47.800 + ret=1 00:03:47.800 + echo '=== Start of file: /tmp/62.5VP ===' 00:03:47.800 + cat /tmp/62.5VP 00:03:47.800 + echo '=== End of file: /tmp/62.5VP ===' 00:03:47.800 + echo '' 00:03:47.800 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Xck ===' 00:03:47.800 + cat /tmp/spdk_tgt_config.json.Xck 00:03:47.800 + echo '=== End of file: /tmp/spdk_tgt_config.json.Xck ===' 00:03:47.800 + echo '' 00:03:47.800 + rm /tmp/62.5VP /tmp/spdk_tgt_config.json.Xck 00:03:47.800 + exit 1 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:47.800 INFO: configuration change detected. 
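The change-detection step traced above is the negative test of the same pipeline: mutate the target (the real test deletes `MallocBdevForConfigChangeCheck` over RPC), re-dump and normalize, and require that `diff` now reports a difference, printing both snapshots on success the way `json_diff.sh` does. A sketch under those assumptions, with a hypothetical function name:

```shell
# Assert that two normalized config snapshots ($1 = before, $2 = after a
# deliberate mutation) actually differ; dump both for the log, as the
# traced script does.
expect_config_changed() {
    if diff -u "$1" "$2" >/dev/null; then
        echo "ERROR: configuration change was not detected" >&2
        return 1
    fi
    echo "=== Start of file: $1 ==="; cat "$1"; echo "=== End of file: $1 ==="
    echo "=== Start of file: $2 ==="; cat "$2"; echo "=== End of file: $2 ==="
    return 0
}
```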
00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@324 -- # [[ -n 2470686 ]] 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.800 11:05:43 json_config -- json_config/json_config.sh@330 -- # killprocess 2470686 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@954 -- # '[' -z 2470686 ']' 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@958 -- # kill -0 
2470686 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@959 -- # uname 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470686 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470686' 00:03:47.800 killing process with pid 2470686 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@973 -- # kill 2470686 00:03:47.800 11:05:43 json_config -- common/autotest_common.sh@978 -- # wait 2470686 00:03:50.330 11:05:45 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:50.330 11:05:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:50.330 11:05:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:50.330 11:05:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.330 11:05:45 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:50.330 11:05:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:50.330 INFO: Success 00:03:50.330 00:03:50.330 real 0m18.416s 00:03:50.330 user 0m19.939s 00:03:50.330 sys 0m2.677s 00:03:50.330 11:05:45 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.330 11:05:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.330 ************************************ 00:03:50.330 END TEST json_config 00:03:50.330 ************************************ 00:03:50.589 11:05:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:50.589 11:05:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.589 11:05:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.589 11:05:45 -- common/autotest_common.sh@10 -- # set +x 00:03:50.589 ************************************ 00:03:50.589 START TEST json_config_extra_key 00:03:50.589 ************************************ 00:03:50.589 11:05:45 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:50.589 11:05:45 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:50.589 11:05:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:03:50.589 11:05:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:50.589 11:05:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:50.589 11:05:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.590 11:05:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.590 11:05:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.590 11:05:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:50.590 11:05:45 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.590 11:05:45 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:50.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.590 --rc genhtml_branch_coverage=1 00:03:50.590 --rc genhtml_function_coverage=1 00:03:50.590 --rc genhtml_legend=1 00:03:50.590 --rc geninfo_all_blocks=1 
00:03:50.590 --rc geninfo_unexecuted_blocks=1 00:03:50.590 00:03:50.590 ' 00:03:50.590 11:05:45 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:50.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.590 --rc genhtml_branch_coverage=1 00:03:50.590 --rc genhtml_function_coverage=1 00:03:50.590 --rc genhtml_legend=1 00:03:50.590 --rc geninfo_all_blocks=1 00:03:50.590 --rc geninfo_unexecuted_blocks=1 00:03:50.590 00:03:50.590 ' 00:03:50.590 11:05:45 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:50.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.590 --rc genhtml_branch_coverage=1 00:03:50.590 --rc genhtml_function_coverage=1 00:03:50.590 --rc genhtml_legend=1 00:03:50.590 --rc geninfo_all_blocks=1 00:03:50.590 --rc geninfo_unexecuted_blocks=1 00:03:50.590 00:03:50.590 ' 00:03:50.590 11:05:45 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:50.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.590 --rc genhtml_branch_coverage=1 00:03:50.590 --rc genhtml_function_coverage=1 00:03:50.590 --rc genhtml_legend=1 00:03:50.590 --rc geninfo_all_blocks=1 00:03:50.590 --rc geninfo_unexecuted_blocks=1 00:03:50.590 00:03:50.590 ' 00:03:50.590 11:05:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:50.590 11:05:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:50.590 11:05:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:50.590 11:05:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:50.590 11:05:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:50.590 11:05:46 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.590 11:05:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.590 11:05:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.590 11:05:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:50.590 11:05:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:50.590 11:05:46 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:50.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:50.590 11:05:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:50.590 11:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:50.590 11:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:50.590 11:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:50.590 11:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:50.590 11:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:50.590 11:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:50.590 11:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:50.590 11:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:50.590 11:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:50.590 11:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:50.590 11:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:50.590 INFO: launching applications... 00:03:50.590 11:05:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:50.590 11:05:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:50.590 11:05:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:50.590 11:05:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:50.590 11:05:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:50.590 11:05:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:50.590 11:05:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:50.590 11:05:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:50.590 11:05:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2471737 00:03:50.590 11:05:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:50.590 11:05:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:50.590 Waiting for target to run... 
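The `json_config_extra_key` test launched here exercises the shutdown path in `json_config/common.sh`: send the target SIGINT, then poll `kill -0` up to 30 times with a half-second sleep before declaring shutdown done. A sketch of that pattern — the function name and the signal parameter are additions for illustration (the traced script hardcodes SIGINT):

```shell
# Bounded graceful-shutdown poll, as traced in json_config_test_shutdown_app.
# $1: target pid; $2: signal to request shutdown with (SIGINT by default).
shutdown_app() {
    local pid=$1 sig=${2:-SIGINT} i
    kill -s "$sig" "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1    # target ignored the signal; caller may escalate to SIGKILL
}
```

One caveat when reproducing this outside the harness: a non-interactive shell starts background jobs with SIGINT ignored, and a child that dies but is not reaped still answers `kill -0`, so the poll is most faithful when the target is not a direct child of the polling shell.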
00:03:50.590 11:05:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2471737 /var/tmp/spdk_tgt.sock 00:03:50.590 11:05:46 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2471737 ']' 00:03:50.590 11:05:46 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:50.590 11:05:46 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:50.590 11:05:46 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:50.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:50.590 11:05:46 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:50.590 11:05:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:50.590 [2024-11-19 11:05:46.069548] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:03:50.590 [2024-11-19 11:05:46.069635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471737 ] 00:03:51.159 [2024-11-19 11:05:46.591633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.160 [2024-11-19 11:05:46.643271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.726 11:05:47 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:51.726 11:05:47 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:51.726 11:05:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:51.726 00:03:51.726 11:05:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:03:51.726 INFO: shutting down applications... 00:03:51.726 11:05:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:51.726 11:05:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:51.726 11:05:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:51.726 11:05:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2471737 ]] 00:03:51.726 11:05:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2471737 00:03:51.726 11:05:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:51.726 11:05:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:51.726 11:05:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2471737 00:03:51.726 11:05:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:52.292 11:05:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:52.292 11:05:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:52.292 11:05:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2471737 00:03:52.292 11:05:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:52.292 11:05:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:52.292 11:05:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:52.292 11:05:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:52.292 SPDK target shutdown done 00:03:52.292 11:05:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:52.292 Success 00:03:52.292 00:03:52.292 real 0m1.694s 00:03:52.292 user 0m1.517s 00:03:52.292 sys 0m0.636s 00:03:52.292 11:05:47 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.292 11:05:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set 
+x 00:03:52.292 ************************************ 00:03:52.292 END TEST json_config_extra_key 00:03:52.292 ************************************ 00:03:52.292 11:05:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:52.292 11:05:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.292 11:05:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.292 11:05:47 -- common/autotest_common.sh@10 -- # set +x 00:03:52.292 ************************************ 00:03:52.292 START TEST alias_rpc 00:03:52.292 ************************************ 00:03:52.292 11:05:47 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:52.292 * Looking for test storage... 00:03:52.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:52.292 11:05:47 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:52.292 11:05:47 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:52.292 11:05:47 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:52.292 11:05:47 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:52.292 11:05:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.292 11:05:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.292 11:05:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.292 11:05:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.292 11:05:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.292 11:05:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.292 11:05:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.292 11:05:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.292 11:05:47 alias_rpc -- scripts/common.sh@340 -- # 
ver1_l=2 00:03:52.292 11:05:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.293 11:05:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:52.293 11:05:47 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.293 11:05:47 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:52.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.293 --rc genhtml_branch_coverage=1 00:03:52.293 --rc genhtml_function_coverage=1 00:03:52.293 --rc genhtml_legend=1 00:03:52.293 --rc geninfo_all_blocks=1 00:03:52.293 --rc geninfo_unexecuted_blocks=1 00:03:52.293 
00:03:52.293 ' 00:03:52.293 11:05:47 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:52.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.293 --rc genhtml_branch_coverage=1 00:03:52.293 --rc genhtml_function_coverage=1 00:03:52.293 --rc genhtml_legend=1 00:03:52.293 --rc geninfo_all_blocks=1 00:03:52.293 --rc geninfo_unexecuted_blocks=1 00:03:52.293 00:03:52.293 ' 00:03:52.293 11:05:47 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:52.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.293 --rc genhtml_branch_coverage=1 00:03:52.293 --rc genhtml_function_coverage=1 00:03:52.293 --rc genhtml_legend=1 00:03:52.293 --rc geninfo_all_blocks=1 00:03:52.293 --rc geninfo_unexecuted_blocks=1 00:03:52.293 00:03:52.293 ' 00:03:52.293 11:05:47 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:52.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.293 --rc genhtml_branch_coverage=1 00:03:52.293 --rc genhtml_function_coverage=1 00:03:52.293 --rc genhtml_legend=1 00:03:52.293 --rc geninfo_all_blocks=1 00:03:52.293 --rc geninfo_unexecuted_blocks=1 00:03:52.293 00:03:52.293 ' 00:03:52.293 11:05:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:52.293 11:05:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2471940 00:03:52.293 11:05:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.293 11:05:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2471940 00:03:52.293 11:05:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2471940 ']' 00:03:52.293 11:05:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.293 11:05:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:52.293 11:05:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.293 11:05:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:52.293 11:05:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.551 [2024-11-19 11:05:47.810606] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:03:52.551 [2024-11-19 11:05:47.810700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471940 ] 00:03:52.551 [2024-11-19 11:05:47.883939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.551 [2024-11-19 11:05:47.939509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.809 11:05:48 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:52.809 11:05:48 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:52.809 11:05:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:53.067 11:05:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2471940 00:03:53.067 11:05:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2471940 ']' 00:03:53.067 11:05:48 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2471940 00:03:53.067 11:05:48 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:03:53.067 11:05:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:53.067 11:05:48 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471940 00:03:53.067 11:05:48 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:53.067 11:05:48 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:53.067 
11:05:48 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471940' 00:03:53.067 killing process with pid 2471940 00:03:53.067 11:05:48 alias_rpc -- common/autotest_common.sh@973 -- # kill 2471940 00:03:53.067 11:05:48 alias_rpc -- common/autotest_common.sh@978 -- # wait 2471940 00:03:53.635 00:03:53.635 real 0m1.319s 00:03:53.635 user 0m1.448s 00:03:53.635 sys 0m0.420s 00:03:53.635 11:05:48 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.635 11:05:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.635 ************************************ 00:03:53.635 END TEST alias_rpc 00:03:53.635 ************************************ 00:03:53.635 11:05:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:53.635 11:05:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:53.635 11:05:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.635 11:05:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.635 11:05:48 -- common/autotest_common.sh@10 -- # set +x 00:03:53.635 ************************************ 00:03:53.635 START TEST spdkcli_tcp 00:03:53.635 ************************************ 00:03:53.635 11:05:48 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:53.635 * Looking for test storage... 
00:03:53.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:53.635 11:05:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:53.635 11:05:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:03:53.635 11:05:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:53.635 11:05:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:53.635 11:05:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.892 11:05:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.892 11:05:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.892 11:05:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:53.892 11:05:49 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.892 11:05:49 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:53.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.892 --rc genhtml_branch_coverage=1 00:03:53.892 --rc genhtml_function_coverage=1 00:03:53.892 --rc genhtml_legend=1 00:03:53.892 --rc geninfo_all_blocks=1 00:03:53.892 --rc geninfo_unexecuted_blocks=1 00:03:53.892 00:03:53.892 ' 00:03:53.892 11:05:49 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:53.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.892 --rc genhtml_branch_coverage=1 00:03:53.892 --rc genhtml_function_coverage=1 00:03:53.892 --rc genhtml_legend=1 00:03:53.892 --rc geninfo_all_blocks=1 00:03:53.892 --rc geninfo_unexecuted_blocks=1 00:03:53.892 00:03:53.892 ' 00:03:53.892 11:05:49 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:53.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.892 --rc genhtml_branch_coverage=1 00:03:53.892 --rc genhtml_function_coverage=1 00:03:53.892 --rc genhtml_legend=1 00:03:53.892 --rc geninfo_all_blocks=1 00:03:53.892 --rc geninfo_unexecuted_blocks=1 00:03:53.892 00:03:53.892 ' 00:03:53.892 11:05:49 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:53.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.892 --rc genhtml_branch_coverage=1 00:03:53.892 --rc genhtml_function_coverage=1 00:03:53.892 --rc genhtml_legend=1 00:03:53.892 --rc geninfo_all_blocks=1 00:03:53.892 --rc geninfo_unexecuted_blocks=1 00:03:53.892 00:03:53.892 ' 00:03:53.892 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:53.892 11:05:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:53.892 11:05:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:53.892 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:53.892 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:53.892 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:53.892 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:53.892 11:05:49 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.892 11:05:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:53.892 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2472135 00:03:53.892 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:53.892 11:05:49 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 2472135 00:03:53.892 11:05:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2472135 ']' 00:03:53.892 11:05:49 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:53.892 11:05:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:53.892 11:05:49 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:53.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:53.892 11:05:49 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:53.892 11:05:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:53.892 [2024-11-19 11:05:49.194222] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:03:53.892 [2024-11-19 11:05:49.194319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472135 ] 00:03:53.892 [2024-11-19 11:05:49.280501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:53.892 [2024-11-19 11:05:49.339250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:53.892 [2024-11-19 11:05:49.339254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.150 11:05:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:54.150 11:05:49 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:03:54.150 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2472259 00:03:54.150 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:54.150 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:54.408 [ 00:03:54.408 "bdev_malloc_delete", 00:03:54.408 "bdev_malloc_create", 00:03:54.408 "bdev_null_resize", 00:03:54.408 "bdev_null_delete", 00:03:54.408 "bdev_null_create", 00:03:54.408 "bdev_nvme_cuse_unregister", 00:03:54.408 "bdev_nvme_cuse_register", 00:03:54.408 "bdev_opal_new_user", 00:03:54.408 "bdev_opal_set_lock_state", 00:03:54.408 "bdev_opal_delete", 00:03:54.408 "bdev_opal_get_info", 00:03:54.408 "bdev_opal_create", 00:03:54.408 "bdev_nvme_opal_revert", 00:03:54.408 "bdev_nvme_opal_init", 00:03:54.408 "bdev_nvme_send_cmd", 00:03:54.408 "bdev_nvme_set_keys", 00:03:54.408 "bdev_nvme_get_path_iostat", 00:03:54.408 "bdev_nvme_get_mdns_discovery_info", 00:03:54.408 "bdev_nvme_stop_mdns_discovery", 00:03:54.408 "bdev_nvme_start_mdns_discovery", 00:03:54.408 "bdev_nvme_set_multipath_policy", 00:03:54.408 "bdev_nvme_set_preferred_path", 00:03:54.408 "bdev_nvme_get_io_paths", 00:03:54.408 "bdev_nvme_remove_error_injection", 00:03:54.408 "bdev_nvme_add_error_injection", 00:03:54.408 "bdev_nvme_get_discovery_info", 00:03:54.408 "bdev_nvme_stop_discovery", 00:03:54.408 "bdev_nvme_start_discovery", 00:03:54.408 "bdev_nvme_get_controller_health_info", 00:03:54.408 "bdev_nvme_disable_controller", 00:03:54.408 "bdev_nvme_enable_controller", 00:03:54.408 "bdev_nvme_reset_controller", 00:03:54.408 "bdev_nvme_get_transport_statistics", 00:03:54.408 "bdev_nvme_apply_firmware", 00:03:54.408 "bdev_nvme_detach_controller", 00:03:54.408 "bdev_nvme_get_controllers", 00:03:54.408 "bdev_nvme_attach_controller", 00:03:54.408 "bdev_nvme_set_hotplug", 00:03:54.408 "bdev_nvme_set_options", 00:03:54.408 "bdev_passthru_delete", 00:03:54.408 "bdev_passthru_create", 00:03:54.408 "bdev_lvol_set_parent_bdev", 00:03:54.408 "bdev_lvol_set_parent", 00:03:54.408 "bdev_lvol_check_shallow_copy", 00:03:54.408 "bdev_lvol_start_shallow_copy", 00:03:54.408 "bdev_lvol_grow_lvstore", 00:03:54.408 "bdev_lvol_get_lvols", 00:03:54.408 
"bdev_lvol_get_lvstores", 00:03:54.408 "bdev_lvol_delete", 00:03:54.408 "bdev_lvol_set_read_only", 00:03:54.408 "bdev_lvol_resize", 00:03:54.408 "bdev_lvol_decouple_parent", 00:03:54.408 "bdev_lvol_inflate", 00:03:54.408 "bdev_lvol_rename", 00:03:54.408 "bdev_lvol_clone_bdev", 00:03:54.408 "bdev_lvol_clone", 00:03:54.408 "bdev_lvol_snapshot", 00:03:54.408 "bdev_lvol_create", 00:03:54.408 "bdev_lvol_delete_lvstore", 00:03:54.408 "bdev_lvol_rename_lvstore", 00:03:54.408 "bdev_lvol_create_lvstore", 00:03:54.408 "bdev_raid_set_options", 00:03:54.408 "bdev_raid_remove_base_bdev", 00:03:54.408 "bdev_raid_add_base_bdev", 00:03:54.408 "bdev_raid_delete", 00:03:54.408 "bdev_raid_create", 00:03:54.408 "bdev_raid_get_bdevs", 00:03:54.408 "bdev_error_inject_error", 00:03:54.408 "bdev_error_delete", 00:03:54.408 "bdev_error_create", 00:03:54.408 "bdev_split_delete", 00:03:54.408 "bdev_split_create", 00:03:54.408 "bdev_delay_delete", 00:03:54.408 "bdev_delay_create", 00:03:54.408 "bdev_delay_update_latency", 00:03:54.408 "bdev_zone_block_delete", 00:03:54.408 "bdev_zone_block_create", 00:03:54.408 "blobfs_create", 00:03:54.408 "blobfs_detect", 00:03:54.408 "blobfs_set_cache_size", 00:03:54.408 "bdev_aio_delete", 00:03:54.408 "bdev_aio_rescan", 00:03:54.408 "bdev_aio_create", 00:03:54.408 "bdev_ftl_set_property", 00:03:54.408 "bdev_ftl_get_properties", 00:03:54.408 "bdev_ftl_get_stats", 00:03:54.408 "bdev_ftl_unmap", 00:03:54.408 "bdev_ftl_unload", 00:03:54.408 "bdev_ftl_delete", 00:03:54.408 "bdev_ftl_load", 00:03:54.408 "bdev_ftl_create", 00:03:54.408 "bdev_virtio_attach_controller", 00:03:54.408 "bdev_virtio_scsi_get_devices", 00:03:54.408 "bdev_virtio_detach_controller", 00:03:54.408 "bdev_virtio_blk_set_hotplug", 00:03:54.408 "bdev_iscsi_delete", 00:03:54.408 "bdev_iscsi_create", 00:03:54.408 "bdev_iscsi_set_options", 00:03:54.408 "accel_error_inject_error", 00:03:54.408 "ioat_scan_accel_module", 00:03:54.408 "dsa_scan_accel_module", 00:03:54.408 "iaa_scan_accel_module", 
00:03:54.408 "vfu_virtio_create_fs_endpoint", 00:03:54.408 "vfu_virtio_create_scsi_endpoint", 00:03:54.408 "vfu_virtio_scsi_remove_target", 00:03:54.408 "vfu_virtio_scsi_add_target", 00:03:54.408 "vfu_virtio_create_blk_endpoint", 00:03:54.408 "vfu_virtio_delete_endpoint", 00:03:54.408 "keyring_file_remove_key", 00:03:54.408 "keyring_file_add_key", 00:03:54.408 "keyring_linux_set_options", 00:03:54.408 "fsdev_aio_delete", 00:03:54.408 "fsdev_aio_create", 00:03:54.408 "iscsi_get_histogram", 00:03:54.408 "iscsi_enable_histogram", 00:03:54.408 "iscsi_set_options", 00:03:54.408 "iscsi_get_auth_groups", 00:03:54.408 "iscsi_auth_group_remove_secret", 00:03:54.408 "iscsi_auth_group_add_secret", 00:03:54.408 "iscsi_delete_auth_group", 00:03:54.409 "iscsi_create_auth_group", 00:03:54.409 "iscsi_set_discovery_auth", 00:03:54.409 "iscsi_get_options", 00:03:54.409 "iscsi_target_node_request_logout", 00:03:54.409 "iscsi_target_node_set_redirect", 00:03:54.409 "iscsi_target_node_set_auth", 00:03:54.409 "iscsi_target_node_add_lun", 00:03:54.409 "iscsi_get_stats", 00:03:54.409 "iscsi_get_connections", 00:03:54.409 "iscsi_portal_group_set_auth", 00:03:54.409 "iscsi_start_portal_group", 00:03:54.409 "iscsi_delete_portal_group", 00:03:54.409 "iscsi_create_portal_group", 00:03:54.409 "iscsi_get_portal_groups", 00:03:54.409 "iscsi_delete_target_node", 00:03:54.409 "iscsi_target_node_remove_pg_ig_maps", 00:03:54.409 "iscsi_target_node_add_pg_ig_maps", 00:03:54.409 "iscsi_create_target_node", 00:03:54.409 "iscsi_get_target_nodes", 00:03:54.409 "iscsi_delete_initiator_group", 00:03:54.409 "iscsi_initiator_group_remove_initiators", 00:03:54.409 "iscsi_initiator_group_add_initiators", 00:03:54.409 "iscsi_create_initiator_group", 00:03:54.409 "iscsi_get_initiator_groups", 00:03:54.409 "nvmf_set_crdt", 00:03:54.409 "nvmf_set_config", 00:03:54.409 "nvmf_set_max_subsystems", 00:03:54.409 "nvmf_stop_mdns_prr", 00:03:54.409 "nvmf_publish_mdns_prr", 00:03:54.409 "nvmf_subsystem_get_listeners", 
00:03:54.409 "nvmf_subsystem_get_qpairs", 00:03:54.409 "nvmf_subsystem_get_controllers", 00:03:54.409 "nvmf_get_stats", 00:03:54.409 "nvmf_get_transports", 00:03:54.409 "nvmf_create_transport", 00:03:54.409 "nvmf_get_targets", 00:03:54.409 "nvmf_delete_target", 00:03:54.409 "nvmf_create_target", 00:03:54.409 "nvmf_subsystem_allow_any_host", 00:03:54.409 "nvmf_subsystem_set_keys", 00:03:54.409 "nvmf_subsystem_remove_host", 00:03:54.409 "nvmf_subsystem_add_host", 00:03:54.409 "nvmf_ns_remove_host", 00:03:54.409 "nvmf_ns_add_host", 00:03:54.409 "nvmf_subsystem_remove_ns", 00:03:54.409 "nvmf_subsystem_set_ns_ana_group", 00:03:54.409 "nvmf_subsystem_add_ns", 00:03:54.409 "nvmf_subsystem_listener_set_ana_state", 00:03:54.409 "nvmf_discovery_get_referrals", 00:03:54.409 "nvmf_discovery_remove_referral", 00:03:54.409 "nvmf_discovery_add_referral", 00:03:54.409 "nvmf_subsystem_remove_listener", 00:03:54.409 "nvmf_subsystem_add_listener", 00:03:54.409 "nvmf_delete_subsystem", 00:03:54.409 "nvmf_create_subsystem", 00:03:54.409 "nvmf_get_subsystems", 00:03:54.409 "env_dpdk_get_mem_stats", 00:03:54.409 "nbd_get_disks", 00:03:54.409 "nbd_stop_disk", 00:03:54.409 "nbd_start_disk", 00:03:54.409 "ublk_recover_disk", 00:03:54.409 "ublk_get_disks", 00:03:54.409 "ublk_stop_disk", 00:03:54.409 "ublk_start_disk", 00:03:54.409 "ublk_destroy_target", 00:03:54.409 "ublk_create_target", 00:03:54.409 "virtio_blk_create_transport", 00:03:54.409 "virtio_blk_get_transports", 00:03:54.409 "vhost_controller_set_coalescing", 00:03:54.409 "vhost_get_controllers", 00:03:54.409 "vhost_delete_controller", 00:03:54.409 "vhost_create_blk_controller", 00:03:54.409 "vhost_scsi_controller_remove_target", 00:03:54.409 "vhost_scsi_controller_add_target", 00:03:54.409 "vhost_start_scsi_controller", 00:03:54.409 "vhost_create_scsi_controller", 00:03:54.409 "thread_set_cpumask", 00:03:54.409 "scheduler_set_options", 00:03:54.409 "framework_get_governor", 00:03:54.409 "framework_get_scheduler", 00:03:54.409 
"framework_set_scheduler", 00:03:54.409 "framework_get_reactors", 00:03:54.409 "thread_get_io_channels", 00:03:54.409 "thread_get_pollers", 00:03:54.409 "thread_get_stats", 00:03:54.409 "framework_monitor_context_switch", 00:03:54.409 "spdk_kill_instance", 00:03:54.409 "log_enable_timestamps", 00:03:54.409 "log_get_flags", 00:03:54.409 "log_clear_flag", 00:03:54.409 "log_set_flag", 00:03:54.409 "log_get_level", 00:03:54.409 "log_set_level", 00:03:54.409 "log_get_print_level", 00:03:54.409 "log_set_print_level", 00:03:54.409 "framework_enable_cpumask_locks", 00:03:54.409 "framework_disable_cpumask_locks", 00:03:54.409 "framework_wait_init", 00:03:54.409 "framework_start_init", 00:03:54.409 "scsi_get_devices", 00:03:54.409 "bdev_get_histogram", 00:03:54.409 "bdev_enable_histogram", 00:03:54.409 "bdev_set_qos_limit", 00:03:54.409 "bdev_set_qd_sampling_period", 00:03:54.409 "bdev_get_bdevs", 00:03:54.409 "bdev_reset_iostat", 00:03:54.409 "bdev_get_iostat", 00:03:54.409 "bdev_examine", 00:03:54.409 "bdev_wait_for_examine", 00:03:54.409 "bdev_set_options", 00:03:54.409 "accel_get_stats", 00:03:54.409 "accel_set_options", 00:03:54.409 "accel_set_driver", 00:03:54.409 "accel_crypto_key_destroy", 00:03:54.409 "accel_crypto_keys_get", 00:03:54.409 "accel_crypto_key_create", 00:03:54.409 "accel_assign_opc", 00:03:54.409 "accel_get_module_info", 00:03:54.409 "accel_get_opc_assignments", 00:03:54.409 "vmd_rescan", 00:03:54.409 "vmd_remove_device", 00:03:54.409 "vmd_enable", 00:03:54.409 "sock_get_default_impl", 00:03:54.409 "sock_set_default_impl", 00:03:54.409 "sock_impl_set_options", 00:03:54.409 "sock_impl_get_options", 00:03:54.409 "iobuf_get_stats", 00:03:54.409 "iobuf_set_options", 00:03:54.409 "keyring_get_keys", 00:03:54.409 "vfu_tgt_set_base_path", 00:03:54.409 "framework_get_pci_devices", 00:03:54.409 "framework_get_config", 00:03:54.409 "framework_get_subsystems", 00:03:54.409 "fsdev_set_opts", 00:03:54.409 "fsdev_get_opts", 00:03:54.409 "trace_get_info", 
00:03:54.409 "trace_get_tpoint_group_mask", 00:03:54.409 "trace_disable_tpoint_group", 00:03:54.409 "trace_enable_tpoint_group", 00:03:54.409 "trace_clear_tpoint_mask", 00:03:54.409 "trace_set_tpoint_mask", 00:03:54.409 "notify_get_notifications", 00:03:54.409 "notify_get_types", 00:03:54.409 "spdk_get_version", 00:03:54.409 "rpc_get_methods" 00:03:54.409 ] 00:03:54.409 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:54.409 11:05:49 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:54.409 11:05:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:54.409 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:54.409 11:05:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2472135 00:03:54.409 11:05:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2472135 ']' 00:03:54.409 11:05:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2472135 00:03:54.409 11:05:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:03:54.409 11:05:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:54.409 11:05:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472135 00:03:54.667 11:05:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:54.667 11:05:49 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:54.667 11:05:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2472135' 00:03:54.667 killing process with pid 2472135 00:03:54.667 11:05:49 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2472135 00:03:54.667 11:05:49 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2472135 00:03:54.926 00:03:54.926 real 0m1.378s 00:03:54.926 user 0m2.442s 00:03:54.926 sys 0m0.472s 00:03:54.926 11:05:50 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.926 11:05:50 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:03:54.926 ************************************ 00:03:54.926 END TEST spdkcli_tcp 00:03:54.926 ************************************ 00:03:54.926 11:05:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:54.926 11:05:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.926 11:05:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.926 11:05:50 -- common/autotest_common.sh@10 -- # set +x 00:03:54.926 ************************************ 00:03:54.926 START TEST dpdk_mem_utility 00:03:54.926 ************************************ 00:03:54.926 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:55.185 * Looking for test storage... 00:03:55.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:55.185 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:55.185 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:03:55.185 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:55.185 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.185 11:05:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:55.185 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.185 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:03:55.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.185 --rc genhtml_branch_coverage=1 00:03:55.185 --rc genhtml_function_coverage=1 00:03:55.185 --rc genhtml_legend=1 00:03:55.185 --rc geninfo_all_blocks=1 00:03:55.185 --rc geninfo_unexecuted_blocks=1 00:03:55.185 00:03:55.185 ' 00:03:55.185 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:55.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.185 --rc genhtml_branch_coverage=1 00:03:55.185 --rc genhtml_function_coverage=1 00:03:55.185 --rc genhtml_legend=1 00:03:55.185 --rc geninfo_all_blocks=1 00:03:55.185 --rc geninfo_unexecuted_blocks=1 00:03:55.185 00:03:55.185 ' 00:03:55.185 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:55.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.185 --rc genhtml_branch_coverage=1 00:03:55.185 --rc genhtml_function_coverage=1 00:03:55.185 --rc genhtml_legend=1 00:03:55.185 --rc geninfo_all_blocks=1 00:03:55.185 --rc geninfo_unexecuted_blocks=1 00:03:55.185 00:03:55.185 ' 00:03:55.186 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:55.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.186 --rc genhtml_branch_coverage=1 00:03:55.186 --rc genhtml_function_coverage=1 00:03:55.186 --rc genhtml_legend=1 00:03:55.186 --rc geninfo_all_blocks=1 00:03:55.186 --rc geninfo_unexecuted_blocks=1 00:03:55.186 00:03:55.186 ' 00:03:55.186 11:05:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:55.186 11:05:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2472458 00:03:55.186 11:05:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.186 11:05:50 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2472458 00:03:55.186 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2472458 ']' 00:03:55.186 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.186 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:55.186 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.186 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:55.186 11:05:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:55.186 [2024-11-19 11:05:50.611190] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:03:55.186 [2024-11-19 11:05:50.611280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472458 ] 00:03:55.444 [2024-11-19 11:05:50.687940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.444 [2024-11-19 11:05:50.746510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.702 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:55.702 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:03:55.702 11:05:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:55.702 11:05:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:55.702 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.702 
11:05:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:03:55.702 {
00:03:55.702 "filename": "/tmp/spdk_mem_dump.txt"
00:03:55.702 }
00:03:55.702 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:55.702 11:05:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:03:55.702 DPDK memory size 810.000000 MiB in 1 heap(s)
00:03:55.702 1 heaps totaling size 810.000000 MiB
00:03:55.702 size: 810.000000 MiB heap id: 0
00:03:55.702 end heaps----------
00:03:55.702 9 mempools totaling size 595.772034 MiB
00:03:55.702 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:03:55.702 size: 158.602051 MiB name: PDU_data_out_Pool
00:03:55.702 size: 92.545471 MiB name: bdev_io_2472458
00:03:55.702 size: 50.003479 MiB name: msgpool_2472458
00:03:55.702 size: 36.509338 MiB name: fsdev_io_2472458
00:03:55.702 size: 21.763794 MiB name: PDU_Pool
00:03:55.702 size: 19.513306 MiB name: SCSI_TASK_Pool
00:03:55.702 size: 4.133484 MiB name: evtpool_2472458
00:03:55.702 size: 0.026123 MiB name: Session_Pool
00:03:55.702 end mempools-------
00:03:55.702 6 memzones totaling size 4.142822 MiB
00:03:55.702 size: 1.000366 MiB name: RG_ring_0_2472458
00:03:55.702 size: 1.000366 MiB name: RG_ring_1_2472458
00:03:55.702 size: 1.000366 MiB name: RG_ring_4_2472458
00:03:55.702 size: 1.000366 MiB name: RG_ring_5_2472458
00:03:55.702 size: 0.125366 MiB name: RG_ring_2_2472458
00:03:55.702 size: 0.015991 MiB name: RG_ring_3_2472458
00:03:55.702 end memzones-------
00:03:55.702 11:05:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:03:55.702 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15
00:03:55.702 list of free elements. size: 10.862488 MiB
00:03:55.702 element at address: 0x200018a00000 with size: 0.999878 MiB
00:03:55.702 element at address: 0x200018c00000 with size: 0.999878 MiB
00:03:55.702 element at address: 0x200000400000 with size: 0.998535 MiB
00:03:55.702 element at address: 0x200031800000 with size: 0.994446 MiB
00:03:55.702 element at address: 0x200006400000 with size: 0.959839 MiB
00:03:55.702 element at address: 0x200012c00000 with size: 0.954285 MiB
00:03:55.702 element at address: 0x200018e00000 with size: 0.936584 MiB
00:03:55.702 element at address: 0x200000200000 with size: 0.717346 MiB
00:03:55.702 element at address: 0x20001a600000 with size: 0.582886 MiB
00:03:55.702 element at address: 0x200000c00000 with size: 0.495422 MiB
00:03:55.702 element at address: 0x20000a600000 with size: 0.490723 MiB
00:03:55.702 element at address: 0x200019000000 with size: 0.485657 MiB
00:03:55.702 element at address: 0x200003e00000 with size: 0.481934 MiB
00:03:55.702 element at address: 0x200027a00000 with size: 0.410034 MiB
00:03:55.702 element at address: 0x200000800000 with size: 0.355042 MiB
00:03:55.702 list of standard malloc elements. size: 199.218628 MiB
00:03:55.702 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:03:55.702 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:03:55.702 element at address: 0x200018afff80 with size: 1.000122 MiB
00:03:55.702 element at address: 0x200018cfff80 with size: 1.000122 MiB
00:03:55.702 element at address: 0x200018efff80 with size: 1.000122 MiB
00:03:55.702 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:03:55.702 element at address: 0x200018eeff00 with size: 0.062622 MiB
00:03:55.702 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:03:55.702 element at address: 0x200018eefdc0 with size: 0.000305 MiB
00:03:55.702 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:03:55.702 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:03:55.702 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:03:55.702 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:03:55.702 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:03:55.702 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:03:55.702 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:03:55.702 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:03:55.702 element at address: 0x20000085b040 with size: 0.000183 MiB
00:03:55.702 element at address: 0x20000085f300 with size: 0.000183 MiB
00:03:55.702 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:03:55.702 element at address: 0x20000087f680 with size: 0.000183 MiB
00:03:55.702 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:03:55.702 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200000cff000 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200003efb980 with size: 0.000183 MiB
00:03:55.702 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:03:55.702 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:03:55.702 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:03:55.702 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200012cf44c0 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200018eefc40 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200018eefd00 with size: 0.000183 MiB
00:03:55.702 element at address: 0x2000190bc740 with size: 0.000183 MiB
00:03:55.702 element at address: 0x20001a695380 with size: 0.000183 MiB
00:03:55.702 element at address: 0x20001a695440 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200027a68f80 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200027a69040 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200027a6fc40 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200027a6fe40 with size: 0.000183 MiB
00:03:55.702 element at address: 0x200027a6ff00 with size: 0.000183 MiB
00:03:55.702 list of memzone associated elements. size: 599.918884 MiB
00:03:55.702 element at address: 0x20001a695500 with size: 211.416748 MiB
00:03:55.702 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:03:55.702 element at address: 0x200027a6ffc0 with size: 157.562561 MiB
00:03:55.702 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:03:55.702 element at address: 0x200012df4780 with size: 92.045044 MiB
00:03:55.702 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2472458_0
00:03:55.702 element at address: 0x200000dff380 with size: 48.003052 MiB
00:03:55.702 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2472458_0
00:03:55.702 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:03:55.702 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2472458_0
00:03:55.702 element at address: 0x2000191be940 with size: 20.255554 MiB
00:03:55.702 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:03:55.702 element at address: 0x2000319feb40 with size: 18.005066 MiB
00:03:55.702 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:03:55.702 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:03:55.702 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2472458_0
00:03:55.702 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:03:55.702 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2472458
00:03:55.702 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:03:55.702 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2472458
00:03:55.702 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:03:55.702 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:03:55.702 element at address: 0x2000190bc800 with size: 1.008118 MiB
00:03:55.702 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:03:55.702 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:03:55.702 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:03:55.702 element at address: 0x200003efba40 with size: 1.008118 MiB
00:03:55.702 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:03:55.702 element at address: 0x200000cff180 with size: 1.000488 MiB
00:03:55.703 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2472458
00:03:55.703 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:03:55.703 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2472458
00:03:55.703 element at address: 0x200012cf4580 with size: 1.000488 MiB
00:03:55.703 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2472458
00:03:55.703 element at address: 0x2000318fe940 with size: 1.000488 MiB
00:03:55.703 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2472458
00:03:55.703 element at address: 0x20000087f740 with size: 0.500488 MiB
00:03:55.703 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2472458
00:03:55.703 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:03:55.703 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2472458
00:03:55.703 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:03:55.703 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:03:55.703 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:03:55.703 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:03:55.703 element at address: 0x20001907c540 with size: 0.250488 MiB
00:03:55.703 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:03:55.703 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:03:55.703 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2472458
00:03:55.703 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:03:55.703 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2472458
00:03:55.703 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:03:55.703 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:03:55.703 element at address: 0x200027a69100 with size: 0.023743 MiB
00:03:55.703 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:03:55.703 element at address: 0x20000085b100 with size: 0.016113 MiB
00:03:55.703 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2472458
00:03:55.703 element at address: 0x200027a6f240 with size: 0.002441 MiB
00:03:55.703 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:03:55.703 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:03:55.703 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2472458
00:03:55.703 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:03:55.703 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2472458
00:03:55.703 element at address: 0x20000085af00 with size: 0.000305 MiB
00:03:55.703 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2472458
00:03:55.703 element at address: 0x200027a6fd00 with size: 0.000305 MiB
00:03:55.703 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:03:55.703 11:05:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:03:55.703 11:05:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2472458
00:03:55.703 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2472458 ']'
00:03:55.703 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2472458
00:03:55.703 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:03:55.703 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:55.703 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472458
00:03:55.703 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:55.703 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:55.703 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2472458'
00:03:55.703 killing process with pid 2472458
00:03:55.703 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2472458
00:03:55.703 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2472458
00:03:56.269
00:03:56.269 real 0m1.170s
00:03:56.269 user 0m1.126s
00:03:56.269 sys 0m0.442s
00:03:56.269 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:56.269 11:05:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:03:56.269 ************************************
00:03:56.269 END TEST dpdk_mem_utility
00:03:56.269 ************************************
00:03:56.269 11:05:51 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:03:56.269 11:05:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:56.269 11:05:51 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:56.269 11:05:51 -- common/autotest_common.sh@10 -- # set +x
00:03:56.269 ************************************
00:03:56.269 START TEST event
00:03:56.269 ************************************
00:03:56.269 11:05:51 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
* Looking for test storage...
00:03:56.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:03:56.269 11:05:51 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:56.269 11:05:51 event -- common/autotest_common.sh@1693 -- # lcov --version
00:03:56.269 11:05:51 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:56.527 11:05:51 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:56.527 11:05:51 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:56.527 11:05:51 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:56.527 11:05:51 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:56.527 11:05:51 event -- scripts/common.sh@336 -- # IFS=.-:
00:03:56.527 11:05:51 event -- scripts/common.sh@336 -- # read -ra ver1
00:03:56.527 11:05:51 event -- scripts/common.sh@337 -- # IFS=.-:
00:03:56.527 11:05:51 event -- scripts/common.sh@337 -- # read -ra ver2
00:03:56.527 11:05:51 event -- scripts/common.sh@338 -- # local 'op=<'
00:03:56.527 11:05:51 event -- scripts/common.sh@340 -- # ver1_l=2
00:03:56.527 11:05:51 event -- scripts/common.sh@341 -- # ver2_l=1
00:03:56.527 11:05:51 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:56.527 11:05:51 event -- scripts/common.sh@344 -- # case "$op" in
00:03:56.527 11:05:51 event -- scripts/common.sh@345 -- # : 1
00:03:56.527 11:05:51 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:56.527 11:05:51 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:56.527 11:05:51 event -- scripts/common.sh@365 -- # decimal 1
00:03:56.527 11:05:51 event -- scripts/common.sh@353 -- # local d=1
00:03:56.527 11:05:51 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:56.527 11:05:51 event -- scripts/common.sh@355 -- # echo 1
00:03:56.527 11:05:51 event -- scripts/common.sh@365 -- # ver1[v]=1
00:03:56.527 11:05:51 event -- scripts/common.sh@366 -- # decimal 2
00:03:56.527 11:05:51 event -- scripts/common.sh@353 -- # local d=2
00:03:56.527 11:05:51 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:56.527 11:05:51 event -- scripts/common.sh@355 -- # echo 2
00:03:56.527 11:05:51 event -- scripts/common.sh@366 -- # ver2[v]=2
00:03:56.527 11:05:51 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:56.527 11:05:51 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:56.527 11:05:51 event -- scripts/common.sh@368 -- # return 0
00:03:56.527 11:05:51 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:56.527 11:05:51 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:56.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:56.527 --rc genhtml_branch_coverage=1
00:03:56.527 --rc genhtml_function_coverage=1
00:03:56.527 --rc genhtml_legend=1
00:03:56.527 --rc geninfo_all_blocks=1
00:03:56.527 --rc geninfo_unexecuted_blocks=1
00:03:56.527
00:03:56.527 '
00:03:56.527 11:05:51 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:56.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:56.527 --rc genhtml_branch_coverage=1
00:03:56.527 --rc genhtml_function_coverage=1
00:03:56.527 --rc genhtml_legend=1
00:03:56.527 --rc geninfo_all_blocks=1
00:03:56.527 --rc geninfo_unexecuted_blocks=1
00:03:56.527
00:03:56.527 '
00:03:56.527 11:05:51 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:56.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:56.527 --rc genhtml_branch_coverage=1
00:03:56.527 --rc genhtml_function_coverage=1
00:03:56.527 --rc genhtml_legend=1
00:03:56.527 --rc geninfo_all_blocks=1
00:03:56.527 --rc geninfo_unexecuted_blocks=1
00:03:56.527
00:03:56.527 '
00:03:56.527 11:05:51 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:56.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:56.527 --rc genhtml_branch_coverage=1
00:03:56.527 --rc genhtml_function_coverage=1
00:03:56.527 --rc genhtml_legend=1
00:03:56.527 --rc geninfo_all_blocks=1
00:03:56.527 --rc geninfo_unexecuted_blocks=1
00:03:56.527
00:03:56.527 '
00:03:56.527 11:05:51 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:03:56.527 11:05:51 event -- bdev/nbd_common.sh@6 -- # set -e
00:03:56.527 11:05:51 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:03:56.527 11:05:51 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:03:56.527 11:05:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:56.527 11:05:51 event -- common/autotest_common.sh@10 -- # set +x
00:03:56.527 ************************************
00:03:56.527 START TEST event_perf
00:03:56.527 ************************************
00:03:56.527 11:05:51 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:03:56.527 Running I/O for 1 seconds...[2024-11-19 11:05:51.819160] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization...
00:03:56.527 [2024-11-19 11:05:51.819229] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472666 ]
00:03:56.527 [2024-11-19 11:05:51.895749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:03:56.527 [2024-11-19 11:05:51.958158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:03:56.527 [2024-11-19 11:05:51.958267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:03:56.527 [2024-11-19 11:05:51.958357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:03:56.527 [2024-11-19 11:05:51.958360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:57.900 Running I/O for 1 seconds...
00:03:57.900 lcore 0: 230424
00:03:57.900 lcore 1: 230423
00:03:57.900 lcore 2: 230425
00:03:57.900 lcore 3: 230425
00:03:57.900 done.
00:03:57.900
00:03:57.900 real 0m1.214s
00:03:57.900 user 0m4.130s
00:03:57.900 sys 0m0.080s
00:03:57.900 11:05:53 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:57.900 11:05:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:03:57.900 ************************************
00:03:57.900 END TEST event_perf
00:03:57.900 ************************************
00:03:57.900 11:05:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:03:57.900 11:05:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:03:57.900 11:05:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:57.900 11:05:53 event -- common/autotest_common.sh@10 -- # set +x
00:03:57.900 ************************************
00:03:57.900 START TEST event_reactor
00:03:57.900 ************************************
00:03:57.900 11:05:53 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:03:57.900 [2024-11-19 11:05:53.077211] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization...
00:03:57.900 [2024-11-19 11:05:53.077276] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472828 ]
00:03:57.900 [2024-11-19 11:05:53.147584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:57.900 [2024-11-19 11:05:53.202616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:58.835 test_start
00:03:58.835 oneshot
00:03:58.835 tick 100
00:03:58.835 tick 100
00:03:58.835 tick 250
00:03:58.835 tick 100
00:03:58.835 tick 100
00:03:58.835 tick 100
00:03:58.835 tick 250
00:03:58.835 tick 500
00:03:58.835 tick 100
00:03:58.835 tick 100
00:03:58.835 tick 250
00:03:58.835 tick 100
00:03:58.835 tick 100
00:03:58.835 test_end
00:03:58.835
00:03:58.835 real 0m1.199s
00:03:58.835 user 0m1.125s
00:03:58.835 sys 0m0.070s
00:03:58.835 11:05:54 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:58.835 11:05:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:03:58.835 ************************************
00:03:58.835 END TEST event_reactor
00:03:58.835 ************************************
00:03:58.835 11:05:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:03:58.835 11:05:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:03:58.835 11:05:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:58.835 11:05:54 event -- common/autotest_common.sh@10 -- # set +x
00:03:58.835 ************************************
00:03:58.835 START TEST event_reactor_perf
00:03:58.835 ************************************
00:03:58.835 11:05:54 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:03:58.835 [2024-11-19 11:05:54.324779] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization...
00:03:58.835 [2024-11-19 11:05:54.324843] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472980 ]
00:03:59.093 [2024-11-19 11:05:54.399019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:59.093 [2024-11-19 11:05:54.454448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:00.028 test_start
00:04:00.028 test_end
00:04:00.028 Performance: 442516 events per second
00:04:00.028
00:04:00.028 real 0m1.205s
00:04:00.028 user 0m1.129s
00:04:00.028 sys 0m0.072s
00:04:00.028 11:05:55 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:00.028 11:05:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:00.028 ************************************
00:04:00.028 END TEST event_reactor_perf
00:04:00.028 ************************************
00:04:00.287 11:05:55 event -- event/event.sh@49 -- # uname -s
00:04:00.287 11:05:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:00.287 11:05:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:00.287 11:05:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:00.287 11:05:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:00.287 11:05:55 event -- common/autotest_common.sh@10 -- # set +x
00:04:00.287 ************************************
00:04:00.287 START TEST event_scheduler
00:04:00.287 ************************************
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
* Looking for test storage...
00:04:00.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:00.287 11:05:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:00.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:00.287 --rc genhtml_branch_coverage=1
00:04:00.287 --rc genhtml_function_coverage=1
00:04:00.287 --rc genhtml_legend=1
00:04:00.287 --rc geninfo_all_blocks=1
00:04:00.287 --rc geninfo_unexecuted_blocks=1
00:04:00.287
00:04:00.287 '
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:00.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:00.287 --rc genhtml_branch_coverage=1
00:04:00.287 --rc genhtml_function_coverage=1
00:04:00.287 --rc genhtml_legend=1
00:04:00.287 --rc geninfo_all_blocks=1
00:04:00.287 --rc geninfo_unexecuted_blocks=1
00:04:00.287
00:04:00.287 '
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:00.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:00.287 --rc genhtml_branch_coverage=1
00:04:00.287 --rc genhtml_function_coverage=1
00:04:00.287 --rc genhtml_legend=1
00:04:00.287 --rc geninfo_all_blocks=1
00:04:00.287 --rc geninfo_unexecuted_blocks=1
00:04:00.287
00:04:00.287 '
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:00.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:00.287 --rc genhtml_branch_coverage=1
00:04:00.287 --rc genhtml_function_coverage=1
00:04:00.287 --rc genhtml_legend=1
00:04:00.287 --rc geninfo_all_blocks=1
00:04:00.287 --rc geninfo_unexecuted_blocks=1
00:04:00.287
00:04:00.287 '
00:04:00.287 11:05:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:00.287 11:05:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2473168
00:04:00.287 11:05:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:00.287 11:05:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:00.287 11:05:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2473168
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2473168 ']'
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:00.287 11:05:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:00.288 [2024-11-19 11:05:55.760870] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization...
00:04:00.288 [2024-11-19 11:05:55.760962] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473168 ]
00:04:00.546 [2024-11-19 11:05:55.838383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:00.546 [2024-11-19 11:05:55.901625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:00.546 [2024-11-19 11:05:55.901688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:00.546 [2024-11-19 11:05:55.901752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:00.546 [2024-11-19 11:05:55.901755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:00.546 11:05:56 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:00.546 11:05:56 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:04:00.546 11:05:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:00.546 11:05:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.546 11:05:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:00.546 [2024-11-19 11:05:56.014698] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:04:00.546 [2024-11-19 11:05:56.014725] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:04:00.546 [2024-11-19 11:05:56.014757] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:00.546 [2024-11-19 11:05:56.014768] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:00.546 [2024-11-19 11:05:56.014778] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:00.546 11:05:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.546 11:05:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:00.546 11:05:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.546 11:05:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:00.806 [2024-11-19 11:05:56.111756] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:00.806 11:05:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.806 11:05:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:00.806 11:05:56 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:00.806 11:05:56 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:00.806 11:05:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:00.806 ************************************
00:04:00.806 START TEST scheduler_create_thread
00:04:00.806 ************************************
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:00.806 2
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:00.806 3
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:00.806 4
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:00.806 5
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:00.806 6
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:00.806 7
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:00.806 8
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:00.806 11:05:56
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.806 9 00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.806 10 00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:00.806 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.807 11:05:56 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.807 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:01.437 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.437 00:04:01.437 real 0m0.590s 00:04:01.437 user 0m0.008s 00:04:01.437 sys 0m0.005s 00:04:01.437 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.437 11:05:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:01.437 ************************************ 00:04:01.437 END TEST scheduler_create_thread 00:04:01.437 ************************************ 00:04:01.437 11:05:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:01.437 11:05:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2473168 00:04:01.437 11:05:56 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2473168 ']' 00:04:01.437 11:05:56 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2473168 00:04:01.437 11:05:56 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:01.437 11:05:56 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.437 11:05:56 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2473168 00:04:01.437 11:05:56 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:01.437 11:05:56 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:01.437 11:05:56 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2473168' 00:04:01.437 killing process with pid 2473168 00:04:01.437 11:05:56 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2473168 00:04:01.437 11:05:56 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2473168 00:04:02.002 [2024-11-19 11:05:57.207970] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
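The teardown above runs a `killprocess`-style helper: probe the pid with `kill -0`, read its command name with `ps`, refuse to signal a `sudo` wrapper, then kill and wait. The real helper lives in `common/autotest_common.sh`; this is only a minimal sketch of that pattern as it appears in the trace (the function name and argument handling here are assumptions, not the actual implementation):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern visible in the trace above.
# Hypothetical helper -- name and exact checks are assumptions.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # '[' -z "$pid" ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 0    # process already gone; nothing to do
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_2 in the log
    [ "$process_name" != "sudo" ] || return 1 # never kill a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it so the workspace is clean
}
```

The `kill -0` probe sends no signal at all; it only checks that the pid exists and is signalable, which is why the trace shows it before `uname`/`ps` rather than after.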
00:04:02.002 00:04:02.002 real 0m1.840s 00:04:02.002 user 0m2.519s 00:04:02.002 sys 0m0.354s 00:04:02.002 11:05:57 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.002 11:05:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:02.002 ************************************ 00:04:02.002 END TEST event_scheduler 00:04:02.002 ************************************ 00:04:02.002 11:05:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:02.002 11:05:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:02.002 11:05:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.002 11:05:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.002 11:05:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:02.002 ************************************ 00:04:02.002 START TEST app_repeat 00:04:02.002 ************************************ 00:04:02.002 11:05:57 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2473485 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2473485' 00:04:02.002 Process app_repeat pid: 2473485 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:02.002 spdk_app_start Round 0 00:04:02.002 11:05:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2473485 /var/tmp/spdk-nbd.sock 00:04:02.002 11:05:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2473485 ']' 00:04:02.002 11:05:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:02.002 11:05:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.003 11:05:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:02.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:02.003 11:05:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.003 11:05:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:02.003 [2024-11-19 11:05:57.495495] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:04:02.003 [2024-11-19 11:05:57.495557] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473485 ] 00:04:02.261 [2024-11-19 11:05:57.573252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:02.261 [2024-11-19 11:05:57.636030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.261 [2024-11-19 11:05:57.636035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.518 11:05:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.519 11:05:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:02.519 11:05:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:02.776 Malloc0 00:04:02.776 11:05:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:03.034 Malloc1 00:04:03.034 11:05:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:03.034 
11:05:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:03.034 11:05:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:03.292 /dev/nbd0 00:04:03.292 11:05:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:03.292 11:05:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:03.292 1+0 records in 00:04:03.292 1+0 records out 00:04:03.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180349 s, 22.7 MB/s 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:03.292 11:05:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:03.292 11:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:03.292 11:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:03.292 11:05:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:03.550 /dev/nbd1 00:04:03.550 11:05:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:03.550 11:05:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:03.550 11:05:58 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:03.550 1+0 records in 00:04:03.550 1+0 records out 00:04:03.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195363 s, 21.0 MB/s 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:03.550 11:05:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:03.550 11:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:03.550 11:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:03.550 11:05:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:03.550 11:05:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:03.550 11:05:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:03.808 { 00:04:03.808 "nbd_device": "/dev/nbd0", 00:04:03.808 "bdev_name": "Malloc0" 00:04:03.808 }, 00:04:03.808 { 00:04:03.808 "nbd_device": "/dev/nbd1", 00:04:03.808 "bdev_name": "Malloc1" 00:04:03.808 } 00:04:03.808 ]' 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:03.808 { 00:04:03.808 "nbd_device": "/dev/nbd0", 00:04:03.808 "bdev_name": "Malloc0" 00:04:03.808 
}, 00:04:03.808 { 00:04:03.808 "nbd_device": "/dev/nbd1", 00:04:03.808 "bdev_name": "Malloc1" 00:04:03.808 } 00:04:03.808 ]' 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:03.808 /dev/nbd1' 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:03.808 /dev/nbd1' 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:03.808 256+0 records in 00:04:03.808 256+0 records out 00:04:03.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00486986 s, 215 MB/s 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:03.808 11:05:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:04.066 256+0 records in 00:04:04.066 256+0 records out 00:04:04.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205576 s, 51.0 MB/s 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:04.066 256+0 records in 00:04:04.066 256+0 records out 00:04:04.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214821 s, 48.8 MB/s 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:04.066 11:05:59 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:04.066 11:05:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:04.324 11:05:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:04.324 11:05:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:04.324 11:05:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:04.324 11:05:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:04.324 11:05:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:04.324 11:05:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:04.324 11:05:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:04.324 11:05:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:04.324 11:05:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:04.324 11:05:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:04.582 11:05:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:04.582 11:05:59 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:04.582 11:05:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:04.582 11:05:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:04.582 11:05:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:04.582 11:05:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:04.582 11:05:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:04.582 11:05:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:04.582 11:05:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:04.582 11:05:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:04.582 11:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:04.840 11:06:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:04.840 11:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:04.840 11:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:04.840 11:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:04.840 11:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:04.840 11:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:04.840 11:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:04.840 11:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:04.840 11:06:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:04.840 11:06:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:04.840 11:06:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:04.840 11:06:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:04.840 11:06:00 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:05.098 11:06:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:05.356 [2024-11-19 11:06:00.796394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:05.614 [2024-11-19 11:06:00.855230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.614 [2024-11-19 11:06:00.855235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.614 [2024-11-19 11:06:00.908836] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:05.614 [2024-11-19 11:06:00.908898] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:08.140 11:06:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:08.140 11:06:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:08.140 spdk_app_start Round 1 00:04:08.140 11:06:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2473485 /var/tmp/spdk-nbd.sock 00:04:08.140 11:06:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2473485 ']' 00:04:08.140 11:06:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:08.140 11:06:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.140 11:06:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:08.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
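The `waitforlisten` step logged above blocks until the target app is serving its RPC socket (`/var/tmp/spdk-nbd.sock` here). A minimal sketch of that wait loop, assuming a bounded poll on the socket path — the function name, retry count, and sleep interval below are illustrative assumptions, not the actual `common/autotest_common.sh` code:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern from the trace above.
# Hypothetical helper: polls until the UNIX domain socket exists.
waitforlisten_sketch() {
    local rpc_addr=$1
    local max_retries=${2:-50}   # assumed bound; the real helper uses its own
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$rpc_addr" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1                              # timed out; caller should fail the test
}
```

Polling for the socket file is only a proxy for readiness; the real helper also confirms the RPC server answers, since the path can exist before the app finishes initializing.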
00:04:08.140 11:06:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.140 11:06:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:08.397 11:06:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.397 11:06:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:08.397 11:06:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:08.656 Malloc0 00:04:08.914 11:06:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:09.171 Malloc1 00:04:09.171 11:06:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:09.171 11:06:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.171 11:06:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:09.171 11:06:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:09.171 11:06:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.172 11:06:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:09.172 11:06:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:09.172 11:06:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.172 11:06:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:09.172 11:06:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:09.172 11:06:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.172 11:06:04 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:09.172 11:06:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:09.172 11:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:09.172 11:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:09.172 11:06:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:09.430 /dev/nbd0 00:04:09.430 11:06:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:09.430 11:06:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:09.430 1+0 records in 00:04:09.430 1+0 records out 00:04:09.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191687 s, 21.4 MB/s 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:09.430 11:06:04 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:09.430 11:06:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:09.430 11:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:09.430 11:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:09.430 11:06:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:09.688 /dev/nbd1 00:04:09.688 11:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:09.688 11:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:09.688 1+0 records in 00:04:09.688 1+0 records out 00:04:09.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177868 s, 23.0 MB/s 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:09.688 11:06:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:09.688 11:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:09.688 11:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:09.688 11:06:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:09.688 11:06:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.688 11:06:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:09.945 { 00:04:09.945 "nbd_device": "/dev/nbd0", 00:04:09.945 "bdev_name": "Malloc0" 00:04:09.945 }, 00:04:09.945 { 00:04:09.945 "nbd_device": "/dev/nbd1", 00:04:09.945 "bdev_name": "Malloc1" 00:04:09.945 } 00:04:09.945 ]' 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:09.945 { 00:04:09.945 "nbd_device": "/dev/nbd0", 00:04:09.945 "bdev_name": "Malloc0" 00:04:09.945 }, 00:04:09.945 { 00:04:09.945 "nbd_device": "/dev/nbd1", 00:04:09.945 "bdev_name": "Malloc1" 00:04:09.945 } 00:04:09.945 ]' 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:09.945 /dev/nbd1' 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:09.945 /dev/nbd1' 00:04:09.945 
11:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:09.945 256+0 records in 00:04:09.945 256+0 records out 00:04:09.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454679 s, 231 MB/s 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:09.945 11:06:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:10.203 256+0 records in 00:04:10.203 256+0 records out 00:04:10.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207601 s, 50.5 MB/s 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:10.203 256+0 records in 00:04:10.203 256+0 records out 00:04:10.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224191 s, 46.8 MB/s 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:10.203 11:06:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:10.461 11:06:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:10.461 11:06:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:10.461 11:06:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:10.461 11:06:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:10.461 11:06:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:10.461 11:06:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:10.461 11:06:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:10.461 11:06:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:10.461 11:06:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:10.461 11:06:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:10.721 11:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:10.721 11:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:10.721 11:06:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:10.721 11:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:10.721 11:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:10.721 11:06:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:10.721 11:06:06 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:10.721 11:06:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:10.721 11:06:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:10.721 11:06:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.721 11:06:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:10.979 11:06:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:10.979 11:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:10.979 11:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:10.979 11:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:10.979 11:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:10.979 11:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:10.979 11:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:10.979 11:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:10.979 11:06:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:10.979 11:06:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:10.979 11:06:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:10.979 11:06:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:10.979 11:06:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:11.237 11:06:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:11.495 [2024-11-19 11:06:06.908089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:11.495 [2024-11-19 11:06:06.965818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.495 [2024-11-19 11:06:06.965818] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.752 [2024-11-19 11:06:07.024344] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:11.753 [2024-11-19 11:06:07.024443] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:14.277 11:06:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:14.277 11:06:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:14.277 spdk_app_start Round 2 00:04:14.277 11:06:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2473485 /var/tmp/spdk-nbd.sock 00:04:14.277 11:06:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2473485 ']' 00:04:14.277 11:06:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:14.277 11:06:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.277 11:06:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:14.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:14.277 11:06:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.277 11:06:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:14.535 11:06:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.535 11:06:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:14.535 11:06:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.793 Malloc0 00:04:14.793 11:06:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:15.051 Malloc1 00:04:15.309 11:06:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.309 11:06:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:15.566 /dev/nbd0 00:04:15.566 11:06:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:15.566 11:06:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:15.566 1+0 records in 00:04:15.566 1+0 records out 00:04:15.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269111 s, 15.2 MB/s 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:15.566 11:06:10 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:15.566 11:06:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:15.566 11:06:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.566 11:06:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.566 11:06:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:15.824 /dev/nbd1 00:04:15.824 11:06:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:15.824 11:06:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:15.824 1+0 records in 00:04:15.824 1+0 records out 00:04:15.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185977 s, 22.0 MB/s 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:15.824 11:06:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:15.824 11:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.824 11:06:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.824 11:06:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:15.824 11:06:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.824 11:06:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:16.082 { 00:04:16.082 "nbd_device": "/dev/nbd0", 00:04:16.082 "bdev_name": "Malloc0" 00:04:16.082 }, 00:04:16.082 { 00:04:16.082 "nbd_device": "/dev/nbd1", 00:04:16.082 "bdev_name": "Malloc1" 00:04:16.082 } 00:04:16.082 ]' 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:16.082 { 00:04:16.082 "nbd_device": "/dev/nbd0", 00:04:16.082 "bdev_name": "Malloc0" 00:04:16.082 }, 00:04:16.082 { 00:04:16.082 "nbd_device": "/dev/nbd1", 00:04:16.082 "bdev_name": "Malloc1" 00:04:16.082 } 00:04:16.082 ]' 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:16.082 /dev/nbd1' 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:16.082 /dev/nbd1' 00:04:16.082 
11:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:16.082 256+0 records in 00:04:16.082 256+0 records out 00:04:16.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530911 s, 198 MB/s 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:16.082 256+0 records in 00:04:16.082 256+0 records out 00:04:16.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204952 s, 51.2 MB/s 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:16.082 256+0 records in 00:04:16.082 256+0 records out 00:04:16.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214024 s, 49.0 MB/s 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:16.082 11:06:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:16.340 11:06:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:16.597 11:06:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:16.597 11:06:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:16.597 11:06:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:16.597 11:06:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:16.597 11:06:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:16.597 11:06:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:16.597 11:06:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:16.597 11:06:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:16.597 11:06:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:16.597 11:06:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:16.855 11:06:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:16.855 11:06:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:16.855 11:06:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:16.855 11:06:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:16.855 11:06:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:16.855 11:06:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:16.855 11:06:12 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:16.855 11:06:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:16.855 11:06:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:16.855 11:06:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.855 11:06:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.113 11:06:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:17.113 11:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:17.113 11:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.113 11:06:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:17.113 11:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:17.113 11:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.113 11:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:17.113 11:06:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:17.113 11:06:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:17.113 11:06:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:17.113 11:06:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:17.113 11:06:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:17.113 11:06:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:17.371 11:06:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:17.629 [2024-11-19 11:06:12.994532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.629 [2024-11-19 11:06:13.051089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.629 [2024-11-19 11:06:13.051094] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.629 [2024-11-19 11:06:13.109012] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:17.629 [2024-11-19 11:06:13.109084] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:20.911 11:06:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2473485 /var/tmp/spdk-nbd.sock 00:04:20.911 11:06:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2473485 ']' 00:04:20.911 11:06:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:20.911 11:06:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.911 11:06:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:20.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:20.911 11:06:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.911 11:06:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:20.911 11:06:16 event.app_repeat -- event/event.sh@39 -- # killprocess 2473485 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2473485 ']' 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2473485 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2473485 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2473485' 00:04:20.911 killing process with pid 2473485 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2473485 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2473485 00:04:20.911 spdk_app_start is called in Round 0. 00:04:20.911 Shutdown signal received, stop current app iteration 00:04:20.911 Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 reinitialization... 00:04:20.911 spdk_app_start is called in Round 1. 00:04:20.911 Shutdown signal received, stop current app iteration 00:04:20.911 Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 reinitialization... 00:04:20.911 spdk_app_start is called in Round 2. 
00:04:20.911 Shutdown signal received, stop current app iteration 00:04:20.911 Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 reinitialization... 00:04:20.911 spdk_app_start is called in Round 3. 00:04:20.911 Shutdown signal received, stop current app iteration 00:04:20.911 11:06:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:20.911 11:06:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:20.911 00:04:20.911 real 0m18.808s 00:04:20.911 user 0m41.524s 00:04:20.911 sys 0m3.299s 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.911 11:06:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:20.911 ************************************ 00:04:20.911 END TEST app_repeat 00:04:20.911 ************************************ 00:04:20.911 11:06:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:20.911 11:06:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:20.911 11:06:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.911 11:06:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.911 11:06:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.911 ************************************ 00:04:20.911 START TEST cpu_locks 00:04:20.911 ************************************ 00:04:20.911 11:06:16 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:20.911 * Looking for test storage... 
00:04:20.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:20.911 11:06:16 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.911 11:06:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.911 11:06:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.169 11:06:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.169 11:06:16 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:21.169 11:06:16 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.169 11:06:16 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.169 --rc genhtml_branch_coverage=1 00:04:21.169 --rc genhtml_function_coverage=1 00:04:21.169 --rc genhtml_legend=1 00:04:21.169 --rc geninfo_all_blocks=1 00:04:21.169 --rc geninfo_unexecuted_blocks=1 00:04:21.169 00:04:21.169 ' 00:04:21.169 11:06:16 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.169 --rc genhtml_branch_coverage=1 00:04:21.169 --rc genhtml_function_coverage=1 00:04:21.169 --rc genhtml_legend=1 00:04:21.169 --rc geninfo_all_blocks=1 00:04:21.169 --rc geninfo_unexecuted_blocks=1 
00:04:21.169 00:04:21.169 ' 00:04:21.170 11:06:16 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.170 --rc genhtml_branch_coverage=1 00:04:21.170 --rc genhtml_function_coverage=1 00:04:21.170 --rc genhtml_legend=1 00:04:21.170 --rc geninfo_all_blocks=1 00:04:21.170 --rc geninfo_unexecuted_blocks=1 00:04:21.170 00:04:21.170 ' 00:04:21.170 11:06:16 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.170 --rc genhtml_branch_coverage=1 00:04:21.170 --rc genhtml_function_coverage=1 00:04:21.170 --rc genhtml_legend=1 00:04:21.170 --rc geninfo_all_blocks=1 00:04:21.170 --rc geninfo_unexecuted_blocks=1 00:04:21.170 00:04:21.170 ' 00:04:21.170 11:06:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:21.170 11:06:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:21.170 11:06:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:21.170 11:06:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:21.170 11:06:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.170 11:06:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.170 11:06:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:21.170 ************************************ 00:04:21.170 START TEST default_locks 00:04:21.170 ************************************ 00:04:21.170 11:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:21.170 11:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2476591 00:04:21.170 11:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:04:21.170 11:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2476591 00:04:21.170 11:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2476591 ']' 00:04:21.170 11:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.170 11:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.170 11:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.170 11:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.170 11:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:21.170 [2024-11-19 11:06:16.558613] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
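The `lt 1.15 2` / `cmp_versions` trace earlier in this section (scripts/common.sh) splits each version string on `.`, `-` and `:` via `IFS=.-:` and compares component by component. A compact sketch of that logic, assuming purely numeric components (the function name `version_lt` is illustrative):

```shell
# Hypothetical sketch of the cmp_versions "less-than" logic traced above:
# split both versions on '.', '-' and ':' and compare numerically per field.
version_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=${#v1[@]}
    if (( ${#v2[@]} > len )); then len=${#v2[@]}; fi
    for (( i = 0; i < len; i++ )); do
        # missing components compare as 0, so "1" == "1.0"
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal is not less-than
}
```

This is why `lt 1.15 2` succeeds in the trace: the first components compare as 1 &lt; 2, so later components (15 vs. nothing) never matter.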
00:04:21.170 [2024-11-19 11:06:16.558710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476591 ] 00:04:21.170 [2024-11-19 11:06:16.633153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.427 [2024-11-19 11:06:16.693984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.685 11:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.685 11:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:21.685 11:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2476591 00:04:21.685 11:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2476591 00:04:21.685 11:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:21.943 lslocks: write error 00:04:21.943 11:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2476591 00:04:21.943 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2476591 ']' 00:04:21.943 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2476591 00:04:21.943 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:21.943 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.943 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476591 00:04:21.943 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.943 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.943 11:06:17 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2476591' 00:04:21.943 killing process with pid 2476591 00:04:21.943 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2476591 00:04:21.943 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2476591 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2476591 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2476591 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2476591 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2476591 ']' 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
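The repeated `waitforlisten <pid> <socket>` calls above (with `max_retries=100` and the "Waiting for process to start up and listen on UNIX domain socket..." echo) amount to a bounded poll: the target must stay alive and its RPC socket must appear. A sketch of that loop — the loop body and the retry parameter are guessed for illustration, only the echo and the retry bound come from the log:

```shell
# Hypothetical sketch of the waitforlisten pattern: poll until the target
# process is listening on its UNIX domain socket, with a bounded retry count
# (made a third parameter here purely for illustration).
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            return 1   # target died before it ever started listening
        fi
        if [ -S "$rpc_addr" ]; then
            return 0   # socket exists; RPCs can be issued now
        fi
        sleep 0.1
    done
    return 1
}
```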
00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:22.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2476591) - No such process 00:04:22.202 ERROR: process (pid: 2476591) is no longer running 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:22.202 00:04:22.202 real 0m1.148s 00:04:22.202 user 0m1.109s 00:04:22.202 sys 0m0.503s 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.202 11:06:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:22.202 ************************************ 00:04:22.202 END TEST default_locks 00:04:22.202 ************************************ 00:04:22.202 11:06:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:22.202 11:06:17 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.202 11:06:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.202 11:06:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:22.494 ************************************ 00:04:22.494 START TEST default_locks_via_rpc 00:04:22.494 ************************************ 00:04:22.494 11:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:22.494 11:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2476753 00:04:22.494 11:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2476753 00:04:22.494 11:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.494 11:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2476753 ']' 00:04:22.494 11:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.494 11:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.494 11:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.494 11:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.494 11:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.494 [2024-11-19 11:06:17.755443] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
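The `NOT waitforlisten ...` / `es=1` sequence just above is the harness's expected-failure wrapper: run a command that *should* fail and invert its status. A simplified sketch (the real helper also validates the argument via `valid_exec_arg` and inspects `(( es > 128 ))` for signal deaths, which this omits):

```shell
# Hypothetical, simplified sketch of the NOT expected-failure wrapper
# traced above: succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    if (( es == 0 )); then
        return 1   # the command unexpectedly succeeded
    fi
    return 0       # it failed, as the test expected
}
```

This is why the "No such process" error in the log is the *passing* outcome: `waitforlisten` on a killed PID must fail for the `NOT` test to succeed.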
00:04:22.494 [2024-11-19 11:06:17.755534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476753 ] 00:04:22.494 [2024-11-19 11:06:17.828314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.494 [2024-11-19 11:06:17.881943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.752 11:06:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2476753 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2476753 00:04:22.752 11:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:23.010 11:06:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2476753 00:04:23.010 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2476753 ']' 00:04:23.010 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2476753 00:04:23.010 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:23.010 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.010 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476753 00:04:23.010 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.010 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.010 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2476753' 00:04:23.010 killing process with pid 2476753 00:04:23.010 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2476753 00:04:23.010 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2476753 00:04:23.576 00:04:23.576 real 0m1.157s 00:04:23.576 user 0m1.108s 00:04:23.576 sys 0m0.510s 00:04:23.576 11:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.576 11:06:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.576 ************************************ 00:04:23.576 END TEST default_locks_via_rpc 00:04:23.576 ************************************ 00:04:23.576 11:06:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:23.576 11:06:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.576 11:06:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.576 11:06:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:23.576 ************************************ 00:04:23.576 START TEST non_locking_app_on_locked_coremask 00:04:23.576 ************************************ 00:04:23.576 11:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:23.576 11:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2476917 00:04:23.576 11:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.576 11:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2476917 /var/tmp/spdk.sock 00:04:23.576 11:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2476917 ']' 00:04:23.576 11:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.576 11:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.576 11:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:23.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.576 11:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.576 11:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:23.576 [2024-11-19 11:06:18.969893] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:04:23.576 [2024-11-19 11:06:18.969998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476917 ] 00:04:23.576 [2024-11-19 11:06:19.046047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.832 [2024-11-19 11:06:19.106789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.090 11:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.090 11:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:24.090 11:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2476926 00:04:24.090 11:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:24.090 11:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2476926 /var/tmp/spdk2.sock 00:04:24.090 11:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2476926 ']' 00:04:24.090 11:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:24.090 11:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.090 11:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:24.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:24.090 11:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.090 11:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:24.090 [2024-11-19 11:06:19.427743] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:04:24.090 [2024-11-19 11:06:19.427830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476926 ] 00:04:24.090 [2024-11-19 11:06:19.547418] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
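The `locks_exist` checks throughout this section are a two-command pipeline shown verbatim in the trace: list the file locks held by the target PID with `lslocks -p`, then grep for the SPDK per-core lock name. Reassembled as one function (the `lslocks: write error` lines in the log are just `lslocks` complaining when `grep -q` closes the pipe early):

```shell
# The locks_exist check as traced above: the test passes only while the
# target PID still holds a file lock whose path contains "spdk_cpu_lock".
locks_exist() {
    local pid=$1
    lslocks -p "$pid" 2>/dev/null | grep -q spdk_cpu_lock
}
```

Running it against a process that holds no such lock (or with `--disable-cpumask-locks` passed to `spdk_tgt`, as in the `default_locks_via_rpc` test) simply returns non-zero.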
00:04:24.090 [2024-11-19 11:06:19.547460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.348 [2024-11-19 11:06:19.665271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.284 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.284 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:25.284 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2476917 00:04:25.284 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2476917 00:04:25.284 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:25.541 lslocks: write error 00:04:25.541 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2476917 00:04:25.541 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2476917 ']' 00:04:25.541 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2476917 00:04:25.541 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:25.541 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.541 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476917 00:04:25.541 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.541 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.541 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2476917' 00:04:25.541 killing process with pid 2476917 00:04:25.541 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2476917 00:04:25.541 11:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2476917 00:04:26.476 11:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2476926 00:04:26.476 11:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2476926 ']' 00:04:26.476 11:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2476926 00:04:26.476 11:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:26.476 11:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.476 11:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2476926 00:04:26.476 11:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.476 11:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.476 11:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2476926' 00:04:26.476 killing process with pid 2476926 00:04:26.476 11:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2476926 00:04:26.476 11:06:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2476926 00:04:26.736 00:04:26.736 real 0m3.284s 00:04:26.736 user 0m3.500s 00:04:26.736 sys 0m1.050s 00:04:26.736 11:06:22 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:26.736 11:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:26.736 ************************************
00:04:26.736 END TEST non_locking_app_on_locked_coremask
00:04:26.736 ************************************
00:04:26.736 11:06:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:04:26.736 11:06:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:26.736 11:06:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:26.736 11:06:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:26.995 ************************************
00:04:26.995 START TEST locking_app_on_unlocked_coremask
00:04:26.995 ************************************
00:04:26.995 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:04:26.995 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2477357
00:04:26.995 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:04:26.995 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2477357 /var/tmp/spdk.sock
00:04:26.995 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2477357 ']'
00:04:26.995 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:26.995 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:26.995 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:26.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:26.995 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:26.995 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:26.995 [2024-11-19 11:06:22.304174] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization...
00:04:26.995 [2024-11-19 11:06:22.304265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477357 ]
00:04:26.995 [2024-11-19 11:06:22.376049] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:26.995 [2024-11-19 11:06:22.376080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:26.995 [2024-11-19 11:06:22.428458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:27.253 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:27.253 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:27.253 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2477360
00:04:27.253 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:27.253 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2477360 /var/tmp/spdk2.sock
00:04:27.253 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2477360 ']'
00:04:27.253 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:27.253 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:27.253 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:27.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:27.253 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:27.253 11:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:27.253 [2024-11-19 11:06:22.741821] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization...
00:04:27.253 [2024-11-19 11:06:22.741904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477360 ]
00:04:27.511 [2024-11-19 11:06:22.856624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:27.511 [2024-11-19 11:06:22.978806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:28.446 11:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:28.446 11:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:28.446 11:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2477360
00:04:28.446 11:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2477360
00:04:28.446 11:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:28.704 lslocks: write error
00:04:28.704 11:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2477357
00:04:28.704 11:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2477357 ']'
00:04:28.704 11:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2477357
00:04:28.704 11:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:28.704 11:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:28.704 11:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477357
00:04:28.961 11:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:28.961 11:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:28.961 11:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477357'
00:04:28.961 killing process with pid 2477357
00:04:28.961 11:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2477357
00:04:28.961 11:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2477357
00:04:29.895 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2477360
00:04:29.895 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2477360 ']'
00:04:29.895 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2477360
00:04:29.895 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:29.895 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:29.895 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477360
00:04:29.895 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:29.895 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:29.895 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477360'
00:04:29.895 killing process with pid 2477360
00:04:29.895 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2477360
00:04:29.895 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2477360
00:04:30.154
00:04:30.154 real 0m3.215s
00:04:30.154 user 0m3.423s
00:04:30.154 sys 0m1.060s
00:04:30.154 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:30.154 11:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:30.154 ************************************
00:04:30.154 END TEST locking_app_on_unlocked_coremask
00:04:30.154 ************************************
00:04:30.154 11:06:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:04:30.154 11:06:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:30.154 11:06:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:30.154 11:06:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:30.154 ************************************
00:04:30.154 START TEST locking_app_on_locked_coremask
00:04:30.154 ************************************
00:04:30.154 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:04:30.154 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2477791
00:04:30.154 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:30.154 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2477791 /var/tmp/spdk.sock
00:04:30.154 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2477791 ']'
00:04:30.154 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:30.154 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:30.154 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:30.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:30.154 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:30.154 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:30.154 [2024-11-19 11:06:25.570887] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization...
00:04:30.154 [2024-11-19 11:06:25.570977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477791 ]
00:04:30.154 [2024-11-19 11:06:25.644000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:30.413 [2024-11-19 11:06:25.697418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2477794
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2477794 /var/tmp/spdk2.sock
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2477794 /var/tmp/spdk2.sock
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2477794 /var/tmp/spdk2.sock
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2477794 ']'
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:30.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:30.670 11:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:30.670 [2024-11-19 11:06:26.007469] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization...
00:04:30.670 [2024-11-19 11:06:26.007552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477794 ]
00:04:30.670 [2024-11-19 11:06:26.116922] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2477791 has claimed it.
00:04:30.670 [2024-11-19 11:06:26.116992] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:31.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2477794) - No such process
00:04:31.235 ERROR: process (pid: 2477794) is no longer running
00:04:31.235 11:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:31.235 11:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:04:31.235 11:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:04:31.235 11:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:31.235 11:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:31.235 11:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:31.235 11:06:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2477791
00:04:31.235 11:06:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2477791
00:04:31.235 11:06:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:31.799 lslocks: write error
00:04:31.799 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2477791
00:04:31.799 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2477791 ']'
00:04:31.799 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2477791
00:04:31.799 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:31.799 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:31.799 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477791
00:04:31.799 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:31.799 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:31.799 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477791'
00:04:31.799 killing process with pid 2477791
00:04:31.799 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2477791
00:04:31.799 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2477791
00:04:32.364
00:04:32.364 real 0m2.053s
00:04:32.364 user 0m2.259s
00:04:32.364 sys 0m0.649s
00:04:32.364 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:32.364 11:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:32.364 ************************************
00:04:32.364 END TEST locking_app_on_locked_coremask
00:04:32.364 ************************************
00:04:32.364 11:06:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:04:32.364 11:06:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:32.364 11:06:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:32.364 11:06:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:32.364 ************************************
00:04:32.364 START TEST locking_overlapped_coremask
00:04:32.364 ************************************
00:04:32.364 11:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:04:32.364 11:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2478089
00:04:32.364 11:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2478089 /var/tmp/spdk.sock
00:04:32.365 11:06:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:04:32.365 11:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2478089 ']'
00:04:32.365 11:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:32.365 11:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:32.365 11:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:32.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:32.365 11:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:32.365 11:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:32.365 [2024-11-19 11:06:27.674242] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization...
00:04:32.365 [2024-11-19 11:06:27.674343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478089 ]
00:04:32.365 [2024-11-19 11:06:27.749754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:32.365 [2024-11-19 11:06:27.811780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:32.365 [2024-11-19 11:06:27.811845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:32.365 [2024-11-19 11:06:27.811848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:32.621 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:32.621 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:32.621 11:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2478094
00:04:32.621 11:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2478094 /var/tmp/spdk2.sock
00:04:32.621 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:04:32.621 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2478094 /var/tmp/spdk2.sock
00:04:32.621 11:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:04:32.621 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:04:32.621 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:32.621 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:04:32.622 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:32.622 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2478094 /var/tmp/spdk2.sock
00:04:32.622 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2478094 ']'
00:04:32.622 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:32.622 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:32.622 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:32.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:32.622 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:32.622 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:32.879 [2024-11-19 11:06:28.146915] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization...
00:04:32.879 [2024-11-19 11:06:28.146997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478094 ]
00:04:32.879 [2024-11-19 11:06:28.264855] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2478089 has claimed it.
00:04:32.879 [2024-11-19 11:06:28.264920] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:33.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2478094) - No such process
00:04:33.443 ERROR: process (pid: 2478094) is no longer running
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2478089
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2478089 ']'
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2478089
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478089
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478089'
00:04:33.443 killing process with pid 2478089
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2478089
00:04:33.443 11:06:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2478089
00:04:34.009
00:04:34.009 real 0m1.706s
00:04:34.009 user 0m4.755s
00:04:34.009 sys 0m0.465s
00:04:34.009 11:06:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:34.009 11:06:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:34.009 ************************************
00:04:34.009 END TEST locking_overlapped_coremask
00:04:34.009 ************************************
00:04:34.009 11:06:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:04:34.009 11:06:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:34.009 11:06:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:34.009 11:06:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:34.009 ************************************
00:04:34.009 START TEST locking_overlapped_coremask_via_rpc
00:04:34.009 ************************************
00:04:34.009 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:04:34.009 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2478262
00:04:34.009 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:04:34.009 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2478262 /var/tmp/spdk.sock
00:04:34.009 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2478262 ']'
00:04:34.009 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:34.009 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:34.009 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:34.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:34.009 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:34.009 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:34.009 [2024-11-19 11:06:29.432895] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization...
00:04:34.009 [2024-11-19 11:06:29.432985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478262 ]
00:04:34.267 [2024-11-19 11:06:29.511841] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:34.267 [2024-11-19 11:06:29.511874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:34.267 [2024-11-19 11:06:29.572945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:34.267 [2024-11-19 11:06:29.573012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:34.267 [2024-11-19 11:06:29.573016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:34.526 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:34.526 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:34.526 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2478389
00:04:34.526 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2478389 /var/tmp/spdk2.sock
00:04:34.526 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:04:34.526 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2478389 ']'
00:04:34.526 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:34.526 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:34.526 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:34.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:34.526 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:34.526 11:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:34.526 [2024-11-19 11:06:29.899242] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization...
00:04:34.526 [2024-11-19 11:06:29.899329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478389 ]
00:04:34.526 [2024-11-19 11:06:30.017835] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:34.526 [2024-11-19 11:06:30.017887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:34.784 [2024-11-19 11:06:30.144382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:34.784 [2024-11-19 11:06:30.144439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:04:34.784 [2024-11-19 11:06:30.144441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:35.718 [2024-11-19 11:06:30.897456] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2478262 has claimed it.
00:04:35.718 request:
00:04:35.718 {
00:04:35.718 "method": "framework_enable_cpumask_locks",
00:04:35.718 "req_id": 1
00:04:35.718 }
00:04:35.718 Got JSON-RPC error response
00:04:35.718 response:
00:04:35.718 {
00:04:35.718 "code": -32603,
00:04:35.718 "message": "Failed to claim CPU core: 2"
00:04:35.718 }
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2478262 /var/tmp/spdk.sock
00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835
-- # '[' -z 2478262 ']' 00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.718 11:06:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.718 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.718 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:35.718 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2478389 /var/tmp/spdk2.sock 00:04:35.718 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2478389 ']' 00:04:35.718 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:35.718 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.718 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:35.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:35.718 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.718 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.976 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.976 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:35.976 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:35.976 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:35.976 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:35.976 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:35.976 00:04:35.976 real 0m2.068s 00:04:35.976 user 0m1.104s 00:04:35.976 sys 0m0.214s 00:04:35.976 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.976 11:06:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.976 ************************************ 00:04:35.976 END TEST locking_overlapped_coremask_via_rpc 00:04:35.976 ************************************ 00:04:35.976 11:06:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:35.976 11:06:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2478262 ]] 00:04:35.976 11:06:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2478262 00:04:35.976 11:06:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2478262 ']' 00:04:35.976 11:06:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2478262 00:04:35.976 11:06:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:35.976 11:06:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.976 11:06:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478262 00:04:36.234 11:06:31 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.234 11:06:31 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.234 11:06:31 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478262' 00:04:36.234 killing process with pid 2478262 00:04:36.234 11:06:31 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2478262 00:04:36.234 11:06:31 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2478262 00:04:36.492 11:06:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2478389 ]] 00:04:36.492 11:06:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2478389 00:04:36.492 11:06:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2478389 ']' 00:04:36.492 11:06:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2478389 00:04:36.492 11:06:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:36.492 11:06:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.492 11:06:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478389 00:04:36.492 11:06:31 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:36.492 11:06:31 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:36.492 11:06:31 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2478389' 00:04:36.492 killing process with pid 2478389 00:04:36.492 11:06:31 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2478389 00:04:36.492 11:06:31 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2478389 00:04:37.057 11:06:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:37.057 11:06:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:37.058 11:06:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2478262 ]] 00:04:37.058 11:06:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2478262 00:04:37.058 11:06:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2478262 ']' 00:04:37.058 11:06:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2478262 00:04:37.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2478262) - No such process 00:04:37.058 11:06:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2478262 is not found' 00:04:37.058 Process with pid 2478262 is not found 00:04:37.058 11:06:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2478389 ]] 00:04:37.058 11:06:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2478389 00:04:37.058 11:06:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2478389 ']' 00:04:37.058 11:06:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2478389 00:04:37.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2478389) - No such process 00:04:37.058 11:06:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2478389 is not found' 00:04:37.058 Process with pid 2478389 is not found 00:04:37.058 11:06:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:37.058 00:04:37.058 real 0m16.073s 00:04:37.058 user 0m28.931s 00:04:37.058 sys 0m5.401s 00:04:37.058 11:06:32 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.058 
11:06:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.058 ************************************ 00:04:37.058 END TEST cpu_locks 00:04:37.058 ************************************ 00:04:37.058 00:04:37.058 real 0m40.793s 00:04:37.058 user 1m19.592s 00:04:37.058 sys 0m9.521s 00:04:37.058 11:06:32 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.058 11:06:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.058 ************************************ 00:04:37.058 END TEST event 00:04:37.058 ************************************ 00:04:37.058 11:06:32 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:37.058 11:06:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.058 11:06:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.058 11:06:32 -- common/autotest_common.sh@10 -- # set +x 00:04:37.058 ************************************ 00:04:37.058 START TEST thread 00:04:37.058 ************************************ 00:04:37.058 11:06:32 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:37.058 * Looking for test storage... 
00:04:37.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:37.058 11:06:32 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.058 11:06:32 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.058 11:06:32 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.316 11:06:32 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.316 11:06:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.316 11:06:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.316 11:06:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.316 11:06:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.316 11:06:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.316 11:06:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.316 11:06:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.316 11:06:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.316 11:06:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.316 11:06:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.316 11:06:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.316 11:06:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:37.316 11:06:32 thread -- scripts/common.sh@345 -- # : 1 00:04:37.316 11:06:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.316 11:06:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.316 11:06:32 thread -- scripts/common.sh@365 -- # decimal 1 00:04:37.316 11:06:32 thread -- scripts/common.sh@353 -- # local d=1 00:04:37.316 11:06:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.316 11:06:32 thread -- scripts/common.sh@355 -- # echo 1 00:04:37.316 11:06:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.316 11:06:32 thread -- scripts/common.sh@366 -- # decimal 2 00:04:37.316 11:06:32 thread -- scripts/common.sh@353 -- # local d=2 00:04:37.316 11:06:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.316 11:06:32 thread -- scripts/common.sh@355 -- # echo 2 00:04:37.316 11:06:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.316 11:06:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.316 11:06:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.316 11:06:32 thread -- scripts/common.sh@368 -- # return 0 00:04:37.316 11:06:32 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.316 11:06:32 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.316 --rc genhtml_branch_coverage=1 00:04:37.316 --rc genhtml_function_coverage=1 00:04:37.316 --rc genhtml_legend=1 00:04:37.316 --rc geninfo_all_blocks=1 00:04:37.316 --rc geninfo_unexecuted_blocks=1 00:04:37.316 00:04:37.316 ' 00:04:37.316 11:06:32 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.316 --rc genhtml_branch_coverage=1 00:04:37.316 --rc genhtml_function_coverage=1 00:04:37.316 --rc genhtml_legend=1 00:04:37.316 --rc geninfo_all_blocks=1 00:04:37.316 --rc geninfo_unexecuted_blocks=1 00:04:37.316 00:04:37.316 ' 00:04:37.316 11:06:32 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.316 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.316 --rc genhtml_branch_coverage=1 00:04:37.316 --rc genhtml_function_coverage=1 00:04:37.316 --rc genhtml_legend=1 00:04:37.316 --rc geninfo_all_blocks=1 00:04:37.316 --rc geninfo_unexecuted_blocks=1 00:04:37.316 00:04:37.316 ' 00:04:37.316 11:06:32 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.316 --rc genhtml_branch_coverage=1 00:04:37.316 --rc genhtml_function_coverage=1 00:04:37.316 --rc genhtml_legend=1 00:04:37.316 --rc geninfo_all_blocks=1 00:04:37.316 --rc geninfo_unexecuted_blocks=1 00:04:37.316 00:04:37.316 ' 00:04:37.316 11:06:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:37.316 11:06:32 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:37.316 11:06:32 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.316 11:06:32 thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.316 ************************************ 00:04:37.316 START TEST thread_poller_perf 00:04:37.316 ************************************ 00:04:37.317 11:06:32 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:37.317 [2024-11-19 11:06:32.664806] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:04:37.317 [2024-11-19 11:06:32.664875] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478769 ] 00:04:37.317 [2024-11-19 11:06:32.744658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.317 [2024-11-19 11:06:32.803592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.317 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:38.691 [2024-11-19T10:06:34.188Z] ====================================== 00:04:38.691 [2024-11-19T10:06:34.188Z] busy:2710626933 (cyc) 00:04:38.691 [2024-11-19T10:06:34.188Z] total_run_count: 365000 00:04:38.691 [2024-11-19T10:06:34.188Z] tsc_hz: 2700000000 (cyc) 00:04:38.691 [2024-11-19T10:06:34.188Z] ====================================== 00:04:38.691 [2024-11-19T10:06:34.188Z] poller_cost: 7426 (cyc), 2750 (nsec) 00:04:38.691 00:04:38.691 real 0m1.222s 00:04:38.691 user 0m1.135s 00:04:38.691 sys 0m0.079s 00:04:38.691 11:06:33 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.691 11:06:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:38.691 ************************************ 00:04:38.691 END TEST thread_poller_perf 00:04:38.691 ************************************ 00:04:38.691 11:06:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:38.691 11:06:33 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:38.691 11:06:33 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.691 11:06:33 thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.691 ************************************ 00:04:38.691 START TEST thread_poller_perf 00:04:38.691 
************************************ 00:04:38.691 11:06:33 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:38.691 [2024-11-19 11:06:33.938888] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:04:38.691 [2024-11-19 11:06:33.938947] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478924 ] 00:04:38.691 [2024-11-19 11:06:34.012540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.691 [2024-11-19 11:06:34.068119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.691 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:40.065 [2024-11-19T10:06:35.562Z] ====================================== 00:04:40.065 [2024-11-19T10:06:35.562Z] busy:2702460078 (cyc) 00:04:40.065 [2024-11-19T10:06:35.562Z] total_run_count: 4874000 00:04:40.065 [2024-11-19T10:06:35.562Z] tsc_hz: 2700000000 (cyc) 00:04:40.065 [2024-11-19T10:06:35.562Z] ====================================== 00:04:40.065 [2024-11-19T10:06:35.562Z] poller_cost: 554 (cyc), 205 (nsec) 00:04:40.065 00:04:40.065 real 0m1.206s 00:04:40.065 user 0m1.139s 00:04:40.065 sys 0m0.063s 00:04:40.065 11:06:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.065 11:06:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.065 ************************************ 00:04:40.065 END TEST thread_poller_perf 00:04:40.065 ************************************ 00:04:40.065 11:06:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:40.065 00:04:40.065 real 0m2.677s 00:04:40.065 user 0m2.404s 00:04:40.065 sys 0m0.276s 00:04:40.065 11:06:35 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.065 11:06:35 thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.065 ************************************ 00:04:40.065 END TEST thread 00:04:40.065 ************************************ 00:04:40.065 11:06:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:40.065 11:06:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:40.065 11:06:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.065 11:06:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.065 11:06:35 -- common/autotest_common.sh@10 -- # set +x 00:04:40.065 ************************************ 00:04:40.065 START TEST app_cmdline 00:04:40.065 ************************************ 00:04:40.065 11:06:35 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:40.065 * Looking for test storage... 00:04:40.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:40.065 11:06:35 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.065 11:06:35 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.066 11:06:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.066 --rc genhtml_branch_coverage=1 
00:04:40.066 --rc genhtml_function_coverage=1 00:04:40.066 --rc genhtml_legend=1 00:04:40.066 --rc geninfo_all_blocks=1 00:04:40.066 --rc geninfo_unexecuted_blocks=1 00:04:40.066 00:04:40.066 ' 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.066 --rc genhtml_branch_coverage=1 00:04:40.066 --rc genhtml_function_coverage=1 00:04:40.066 --rc genhtml_legend=1 00:04:40.066 --rc geninfo_all_blocks=1 00:04:40.066 --rc geninfo_unexecuted_blocks=1 00:04:40.066 00:04:40.066 ' 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.066 --rc genhtml_branch_coverage=1 00:04:40.066 --rc genhtml_function_coverage=1 00:04:40.066 --rc genhtml_legend=1 00:04:40.066 --rc geninfo_all_blocks=1 00:04:40.066 --rc geninfo_unexecuted_blocks=1 00:04:40.066 00:04:40.066 ' 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.066 --rc genhtml_branch_coverage=1 00:04:40.066 --rc genhtml_function_coverage=1 00:04:40.066 --rc genhtml_legend=1 00:04:40.066 --rc geninfo_all_blocks=1 00:04:40.066 --rc geninfo_unexecuted_blocks=1 00:04:40.066 00:04:40.066 ' 00:04:40.066 11:06:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:40.066 11:06:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2479248 00:04:40.066 11:06:35 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:40.066 11:06:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2479248 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2479248 ']' 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.066 11:06:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:40.066 [2024-11-19 11:06:35.397974] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:04:40.066 [2024-11-19 11:06:35.398058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479248 ] 00:04:40.066 [2024-11-19 11:06:35.472452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.066 [2024-11-19 11:06:35.528750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.324 11:06:35 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.324 11:06:35 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:40.324 11:06:35 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:40.582 { 00:04:40.582 "version": "SPDK v25.01-pre git sha1 73f18e890", 00:04:40.582 "fields": { 00:04:40.582 "major": 25, 00:04:40.582 "minor": 1, 00:04:40.582 "patch": 0, 00:04:40.582 "suffix": "-pre", 00:04:40.582 "commit": "73f18e890" 00:04:40.582 } 00:04:40.582 } 00:04:40.582 11:06:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:40.582 11:06:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:40.582 11:06:36 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:40.582 11:06:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:40.582 11:06:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:40.582 11:06:36 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.582 11:06:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:40.582 11:06:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:40.582 11:06:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:40.582 11:06:36 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.840 11:06:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:40.840 11:06:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:40.840 11:06:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:40.840 11:06:36 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:40.840 11:06:36 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:40.840 11:06:36 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:40.840 11:06:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.840 11:06:36 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:40.840 11:06:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.840 11:06:36 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:40.840 11:06:36 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:04:40.840 11:06:36 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:40.840 11:06:36 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:40.840 11:06:36 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:41.098 request: 00:04:41.098 { 00:04:41.098 "method": "env_dpdk_get_mem_stats", 00:04:41.098 "req_id": 1 00:04:41.098 } 00:04:41.098 Got JSON-RPC error response 00:04:41.098 response: 00:04:41.098 { 00:04:41.098 "code": -32601, 00:04:41.098 "message": "Method not found" 00:04:41.098 } 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.098 11:06:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2479248 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2479248 ']' 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2479248 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2479248 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2479248' 00:04:41.098 killing process with pid 2479248 00:04:41.098 
11:06:36 app_cmdline -- common/autotest_common.sh@973 -- # kill 2479248 00:04:41.098 11:06:36 app_cmdline -- common/autotest_common.sh@978 -- # wait 2479248 00:04:41.357 00:04:41.357 real 0m1.625s 00:04:41.357 user 0m2.013s 00:04:41.357 sys 0m0.478s 00:04:41.357 11:06:36 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.357 11:06:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:41.357 ************************************ 00:04:41.357 END TEST app_cmdline 00:04:41.357 ************************************ 00:04:41.357 11:06:36 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:41.357 11:06:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.357 11:06:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.357 11:06:36 -- common/autotest_common.sh@10 -- # set +x 00:04:41.617 ************************************ 00:04:41.617 START TEST version 00:04:41.617 ************************************ 00:04:41.617 11:06:36 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:41.617 * Looking for test storage... 
00:04:41.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:41.617 11:06:36 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.617 11:06:36 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.617 11:06:36 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.617 11:06:37 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.617 11:06:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.617 11:06:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.617 11:06:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.617 11:06:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.617 11:06:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.617 11:06:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.617 11:06:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.617 11:06:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.617 11:06:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.617 11:06:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.617 11:06:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.617 11:06:37 version -- scripts/common.sh@344 -- # case "$op" in 00:04:41.617 11:06:37 version -- scripts/common.sh@345 -- # : 1 00:04:41.617 11:06:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.617 11:06:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.617 11:06:37 version -- scripts/common.sh@365 -- # decimal 1 00:04:41.617 11:06:37 version -- scripts/common.sh@353 -- # local d=1 00:04:41.617 11:06:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.617 11:06:37 version -- scripts/common.sh@355 -- # echo 1 00:04:41.617 11:06:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.617 11:06:37 version -- scripts/common.sh@366 -- # decimal 2 00:04:41.617 11:06:37 version -- scripts/common.sh@353 -- # local d=2 00:04:41.617 11:06:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.617 11:06:37 version -- scripts/common.sh@355 -- # echo 2 00:04:41.617 11:06:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.617 11:06:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.617 11:06:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.617 11:06:37 version -- scripts/common.sh@368 -- # return 0 00:04:41.617 11:06:37 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.617 11:06:37 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.617 --rc genhtml_branch_coverage=1 00:04:41.617 --rc genhtml_function_coverage=1 00:04:41.617 --rc genhtml_legend=1 00:04:41.617 --rc geninfo_all_blocks=1 00:04:41.617 --rc geninfo_unexecuted_blocks=1 00:04:41.617 00:04:41.617 ' 00:04:41.617 11:06:37 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.617 --rc genhtml_branch_coverage=1 00:04:41.617 --rc genhtml_function_coverage=1 00:04:41.617 --rc genhtml_legend=1 00:04:41.617 --rc geninfo_all_blocks=1 00:04:41.617 --rc geninfo_unexecuted_blocks=1 00:04:41.617 00:04:41.617 ' 00:04:41.617 11:06:37 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.617 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.617 --rc genhtml_branch_coverage=1 00:04:41.617 --rc genhtml_function_coverage=1 00:04:41.617 --rc genhtml_legend=1 00:04:41.617 --rc geninfo_all_blocks=1 00:04:41.617 --rc geninfo_unexecuted_blocks=1 00:04:41.617 00:04:41.617 ' 00:04:41.617 11:06:37 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.617 --rc genhtml_branch_coverage=1 00:04:41.617 --rc genhtml_function_coverage=1 00:04:41.617 --rc genhtml_legend=1 00:04:41.617 --rc geninfo_all_blocks=1 00:04:41.617 --rc geninfo_unexecuted_blocks=1 00:04:41.617 00:04:41.617 ' 00:04:41.617 11:06:37 version -- app/version.sh@17 -- # get_header_version major 00:04:41.617 11:06:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:41.617 11:06:37 version -- app/version.sh@14 -- # cut -f2 00:04:41.617 11:06:37 version -- app/version.sh@14 -- # tr -d '"' 00:04:41.617 11:06:37 version -- app/version.sh@17 -- # major=25 00:04:41.618 11:06:37 version -- app/version.sh@18 -- # get_header_version minor 00:04:41.618 11:06:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:41.618 11:06:37 version -- app/version.sh@14 -- # cut -f2 00:04:41.618 11:06:37 version -- app/version.sh@14 -- # tr -d '"' 00:04:41.618 11:06:37 version -- app/version.sh@18 -- # minor=1 00:04:41.618 11:06:37 version -- app/version.sh@19 -- # get_header_version patch 00:04:41.618 11:06:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:41.618 11:06:37 version -- app/version.sh@14 -- # cut -f2 00:04:41.618 11:06:37 version -- app/version.sh@14 -- # tr -d '"' 00:04:41.618 
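The `get_header_version` steps above isolate each `SPDK_VERSION_*` value from version.h with a `grep | cut | tr` pipeline. A standalone sketch of that extraction against a hypothetical header file (the tab between macro name and value is what lets the default tab-delimited `cut -f2` pick out the value; real header contents may differ):

```shell
# Hedged sketch of the grep|cut|tr pipeline from version.sh, run against a
# hypothetical version.h written to a temp file.
hdr=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t25\n'      >  "$hdr"
printf '#define SPDK_VERSION_MINOR\t1\n'       >> "$hdr"
printf '#define SPDK_VERSION_SUFFIX\t"-pre"\n' >> "$hdr"

get_header_version() {
  # grep the one #define line, take the tab-delimited value, drop any quotes
  grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

echo "$(get_header_version MAJOR).$(get_header_version MINOR)$(get_header_version SUFFIX)"
# prints: 25.1-pre
```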
11:06:37 version -- app/version.sh@19 -- # patch=0 00:04:41.618 11:06:37 version -- app/version.sh@20 -- # get_header_version suffix 00:04:41.618 11:06:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:41.618 11:06:37 version -- app/version.sh@14 -- # cut -f2 00:04:41.618 11:06:37 version -- app/version.sh@14 -- # tr -d '"' 00:04:41.618 11:06:37 version -- app/version.sh@20 -- # suffix=-pre 00:04:41.618 11:06:37 version -- app/version.sh@22 -- # version=25.1 00:04:41.618 11:06:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:41.618 11:06:37 version -- app/version.sh@28 -- # version=25.1rc0 00:04:41.618 11:06:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:41.618 11:06:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:41.618 11:06:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:41.618 11:06:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:41.618 00:04:41.618 real 0m0.197s 00:04:41.618 user 0m0.132s 00:04:41.618 sys 0m0.092s 00:04:41.618 11:06:37 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.618 11:06:37 version -- common/autotest_common.sh@10 -- # set +x 00:04:41.618 ************************************ 00:04:41.618 END TEST version 00:04:41.618 ************************************ 00:04:41.618 11:06:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:41.618 11:06:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:41.618 11:06:37 -- spdk/autotest.sh@194 -- # uname -s 00:04:41.618 11:06:37 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:41.618 11:06:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:41.618 11:06:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:41.618 11:06:37 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:41.618 11:06:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:41.618 11:06:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:41.618 11:06:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.618 11:06:37 -- common/autotest_common.sh@10 -- # set +x 00:04:41.877 11:06:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:41.877 11:06:37 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:41.877 11:06:37 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:41.877 11:06:37 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:41.877 11:06:37 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:41.877 11:06:37 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:41.877 11:06:37 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:41.877 11:06:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:41.877 11:06:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.877 11:06:37 -- common/autotest_common.sh@10 -- # set +x 00:04:41.877 ************************************ 00:04:41.877 START TEST nvmf_tcp 00:04:41.877 ************************************ 00:04:41.877 11:06:37 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:41.877 * Looking for test storage... 
00:04:41.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:41.877 11:06:37 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.877 11:06:37 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.877 11:06:37 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.877 11:06:37 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.877 11:06:37 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:41.877 11:06:37 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.878 11:06:37 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.878 --rc genhtml_branch_coverage=1 00:04:41.878 --rc genhtml_function_coverage=1 00:04:41.878 --rc genhtml_legend=1 00:04:41.878 --rc geninfo_all_blocks=1 00:04:41.878 --rc geninfo_unexecuted_blocks=1 00:04:41.878 00:04:41.878 ' 00:04:41.878 11:06:37 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.878 --rc genhtml_branch_coverage=1 00:04:41.878 --rc genhtml_function_coverage=1 00:04:41.878 --rc genhtml_legend=1 00:04:41.878 --rc geninfo_all_blocks=1 00:04:41.878 --rc geninfo_unexecuted_blocks=1 00:04:41.878 00:04:41.878 ' 00:04:41.878 11:06:37 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:41.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.878 --rc genhtml_branch_coverage=1 00:04:41.878 --rc genhtml_function_coverage=1 00:04:41.878 --rc genhtml_legend=1 00:04:41.878 --rc geninfo_all_blocks=1 00:04:41.878 --rc geninfo_unexecuted_blocks=1 00:04:41.878 00:04:41.878 ' 00:04:41.878 11:06:37 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.878 --rc genhtml_branch_coverage=1 00:04:41.878 --rc genhtml_function_coverage=1 00:04:41.878 --rc genhtml_legend=1 00:04:41.878 --rc geninfo_all_blocks=1 00:04:41.878 --rc geninfo_unexecuted_blocks=1 00:04:41.878 00:04:41.878 ' 00:04:41.878 11:06:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:41.878 11:06:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:41.878 11:06:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:41.878 11:06:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:41.878 11:06:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.878 11:06:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.878 ************************************ 00:04:41.878 START TEST nvmf_target_core 00:04:41.878 ************************************ 00:04:41.878 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:41.878 * Looking for test storage... 
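Each test prefix above re-runs the same lcov gate, `lt 1.15 2`, through `cmp_versions`: split both versions on dots and compare field by field, treating missing fields as zero. A condensed, hedged rewrite of that logic (not the exact scripts/common.sh code):

```shell
# Condensed sketch of the dotted-version comparison the harness keeps
# re-running; missing fields (e.g. "2" vs "1.15") default to 0.
lt() {
  local IFS=. i
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov predates 2.x"   # prints: lcov predates 2.x
```

Numeric field-wise comparison is what makes `1.15 < 2` true even though a plain string comparison would order them the other way.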
00:04:41.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:41.878 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.878 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.878 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.143 --rc genhtml_branch_coverage=1 00:04:42.143 --rc genhtml_function_coverage=1 00:04:42.143 --rc genhtml_legend=1 00:04:42.143 --rc geninfo_all_blocks=1 00:04:42.143 --rc geninfo_unexecuted_blocks=1 00:04:42.143 00:04:42.143 ' 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.143 --rc genhtml_branch_coverage=1 
00:04:42.143 --rc genhtml_function_coverage=1 00:04:42.143 --rc genhtml_legend=1 00:04:42.143 --rc geninfo_all_blocks=1 00:04:42.143 --rc geninfo_unexecuted_blocks=1 00:04:42.143 00:04:42.143 ' 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.143 --rc genhtml_branch_coverage=1 00:04:42.143 --rc genhtml_function_coverage=1 00:04:42.143 --rc genhtml_legend=1 00:04:42.143 --rc geninfo_all_blocks=1 00:04:42.143 --rc geninfo_unexecuted_blocks=1 00:04:42.143 00:04:42.143 ' 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.143 --rc genhtml_branch_coverage=1 00:04:42.143 --rc genhtml_function_coverage=1 00:04:42.143 --rc genhtml_legend=1 00:04:42.143 --rc geninfo_all_blocks=1 00:04:42.143 --rc geninfo_unexecuted_blocks=1 00:04:42.143 00:04:42.143 ' 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.143 11:06:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
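The `common.sh: line 33: [: : integer expression expected` complaint above comes from `'[' '' -eq 1 ']'`: `test`'s `-eq` is an arithmetic comparison and needs integer operands, so an empty string makes `test` itself fail. A small reproduction plus the usual `${var:-0}` guard (the variable name here is hypothetical):

```shell
# Reproduces the "[: : integer expression expected" failure seen in the log:
# -eq needs integers, so an empty operand makes the test command error out.
flag=''
if [ "$flag" -eq 1 ] 2>/dev/null; then
  echo "enabled"
else
  echo "disabled-or-error"   # taken because the comparison itself failed
fi

# Defaulting the value avoids the stderr noise without changing the outcome:
[ "${flag:-0}" -eq 1 ] || echo "guarded"   # prints: guarded
```

The script proceeds anyway because the failing `[` only selects the else path; the message is noise rather than a fatal error, which matches the log continuing normally.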
00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:42.144 ************************************ 00:04:42.144 START TEST nvmf_abort 00:04:42.144 ************************************ 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:42.144 * Looking for test storage... 
00:04:42.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.144 
11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.144 --rc genhtml_branch_coverage=1 00:04:42.144 --rc genhtml_function_coverage=1 00:04:42.144 --rc genhtml_legend=1 00:04:42.144 --rc geninfo_all_blocks=1 00:04:42.144 --rc 
geninfo_unexecuted_blocks=1 00:04:42.144 00:04:42.144 ' 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.144 --rc genhtml_branch_coverage=1 00:04:42.144 --rc genhtml_function_coverage=1 00:04:42.144 --rc genhtml_legend=1 00:04:42.144 --rc geninfo_all_blocks=1 00:04:42.144 --rc geninfo_unexecuted_blocks=1 00:04:42.144 00:04:42.144 ' 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.144 --rc genhtml_branch_coverage=1 00:04:42.144 --rc genhtml_function_coverage=1 00:04:42.144 --rc genhtml_legend=1 00:04:42.144 --rc geninfo_all_blocks=1 00:04:42.144 --rc geninfo_unexecuted_blocks=1 00:04:42.144 00:04:42.144 ' 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.144 --rc genhtml_branch_coverage=1 00:04:42.144 --rc genhtml_function_coverage=1 00:04:42.144 --rc genhtml_legend=1 00:04:42.144 --rc geninfo_all_blocks=1 00:04:42.144 --rc geninfo_unexecuted_blocks=1 00:04:42.144 00:04:42.144 ' 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.144 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.144 11:06:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:42.145 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:45.489 11:06:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:04:45.489 Found 0000:82:00.0 (0x8086 - 0x159b) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:04:45.489 Found 0000:82:00.1 (0x8086 - 0x159b) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:45.489 11:06:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:04:45.489 Found net devices under 0000:82:00.0: cvl_0_0 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:82:00.1: cvl_0_1' 00:04:45.489 Found net devices under 0000:82:00.1: cvl_0_1 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:45.489 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:45.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:45.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:04:45.490 00:04:45.490 --- 10.0.0.2 ping statistics --- 00:04:45.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:45.490 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:45.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:45.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:04:45.490 00:04:45.490 --- 10.0.0.1 ping statistics --- 00:04:45.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:45.490 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2481634 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2481634 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2481634 ']' 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:45.490 [2024-11-19 11:06:40.464085] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:04:45.490 [2024-11-19 11:06:40.464185] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:45.490 [2024-11-19 11:06:40.547939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:45.490 [2024-11-19 11:06:40.608984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:45.490 [2024-11-19 11:06:40.609045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:45.490 [2024-11-19 11:06:40.609077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:45.490 [2024-11-19 11:06:40.609089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:45.490 [2024-11-19 11:06:40.609099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:45.490 [2024-11-19 11:06:40.610883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.490 [2024-11-19 11:06:40.610947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.490 [2024-11-19 11:06:40.610951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:45.490 [2024-11-19 11:06:40.758606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:45.490 Malloc0 00:04:45.490 11:06:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:45.490 Delay0 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:45.490 [2024-11-19 11:06:40.826942] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.490 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:45.490 [2024-11-19 11:06:40.942255] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:48.019 Initializing NVMe Controllers 00:04:48.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:48.019 controller IO queue size 128 less than required 00:04:48.019 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:48.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:48.019 Initialization complete. Launching workers. 
00:04:48.019 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28272 00:04:48.019 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28333, failed to submit 62 00:04:48.019 success 28276, unsuccessful 57, failed 0 00:04:48.019 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:48.019 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.019 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.019 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.019 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:48.019 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:48.019 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:48.019 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:48.019 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:48.019 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:48.019 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:48.019 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:48.019 rmmod nvme_tcp 00:04:48.019 rmmod nvme_fabrics 00:04:48.019 rmmod nvme_keyring 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:48.019 11:06:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2481634 ']' 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2481634 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2481634 ']' 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2481634 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2481634 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2481634' 00:04:48.019 killing process with pid 2481634 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2481634 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2481634 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:48.019 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:49.929 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:49.929 00:04:49.929 real 0m7.876s 00:04:49.929 user 0m10.611s 00:04:49.929 sys 0m2.995s 00:04:49.929 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.929 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:49.929 ************************************ 00:04:49.929 END TEST nvmf_abort 00:04:49.929 ************************************ 00:04:49.929 11:06:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:49.929 11:06:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:49.929 11:06:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.929 11:06:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:49.929 ************************************ 00:04:49.929 START TEST nvmf_ns_hotplug_stress 00:04:49.929 ************************************ 00:04:49.929 11:06:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:50.189 * Looking for test storage... 00:04:50.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.189 
11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.189 11:06:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.189 --rc genhtml_branch_coverage=1 00:04:50.189 --rc genhtml_function_coverage=1 00:04:50.189 --rc genhtml_legend=1 00:04:50.189 --rc geninfo_all_blocks=1 00:04:50.189 --rc geninfo_unexecuted_blocks=1 00:04:50.189 00:04:50.189 ' 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.189 --rc genhtml_branch_coverage=1 00:04:50.189 --rc genhtml_function_coverage=1 00:04:50.189 --rc genhtml_legend=1 00:04:50.189 --rc geninfo_all_blocks=1 00:04:50.189 --rc geninfo_unexecuted_blocks=1 00:04:50.189 00:04:50.189 ' 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.189 --rc genhtml_branch_coverage=1 00:04:50.189 --rc genhtml_function_coverage=1 00:04:50.189 --rc genhtml_legend=1 00:04:50.189 --rc geninfo_all_blocks=1 00:04:50.189 --rc geninfo_unexecuted_blocks=1 00:04:50.189 00:04:50.189 ' 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.189 --rc genhtml_branch_coverage=1 00:04:50.189 --rc genhtml_function_coverage=1 00:04:50.189 --rc genhtml_legend=1 00:04:50.189 --rc geninfo_all_blocks=1 00:04:50.189 --rc geninfo_unexecuted_blocks=1 00:04:50.189 
00:04:50.189 ' 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:50.189 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:50.190 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:04:53.476 11:06:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:04:53.476 Found 0000:82:00.0 (0x8086 - 0x159b) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:04:53.476 Found 0000:82:00.1 (0x8086 - 0x159b) 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:53.476 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:53.477 11:06:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:04:53.477 Found net devices under 0000:82:00.0: cvl_0_0 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:53.477 11:06:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:04:53.477 Found net devices under 0000:82:00.1: cvl_0_1 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:53.477 11:06:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:53.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:53.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:04:53.477 00:04:53.477 --- 10.0.0.2 ping statistics --- 00:04:53.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:53.477 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:53.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:53.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:04:53.477 00:04:53.477 --- 10.0.0.1 ping statistics --- 00:04:53.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:53.477 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2484280 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2484280 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2484280 ']' 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:53.477 [2024-11-19 11:06:48.492969] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:04:53.477 [2024-11-19 11:06:48.493073] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:53.477 [2024-11-19 11:06:48.575359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:53.477 [2024-11-19 11:06:48.637831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:53.477 [2024-11-19 11:06:48.637899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:53.477 [2024-11-19 11:06:48.637915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:53.477 [2024-11-19 11:06:48.637927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:53.477 [2024-11-19 11:06:48.637937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:53.477 [2024-11-19 11:06:48.639340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.477 [2024-11-19 11:06:48.639415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.477 [2024-11-19 11:06:48.639419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:53.477 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:04:53.478 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:04:53.736 [2024-11-19 11:06:49.020174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.736 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:04:53.993 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:04:54.251 [2024-11-19 11:06:49.555027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:54.251 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:54.509 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:04:54.767 Malloc0 00:04:54.767 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:55.025 Delay0 00:04:55.025 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:55.283 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:04:55.541 NULL1 00:04:55.541 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:04:55.799 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2484701 00:04:55.799 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:04:55.799 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:04:55.799 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:57.172 Read completed with error (sct=0, sc=11) 00:04:57.172 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:57.430 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:04:57.430 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:04:57.688 true 00:04:57.688 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:04:57.688 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:58.621 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:58.621 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:04:58.621 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:04:58.879 true 00:04:58.879 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:04:58.879 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:59.137 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:59.395 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:04:59.395 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:04:59.653 true 00:04:59.653 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:04:59.653 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:00.218 
11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:00.219 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:00.219 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:00.477 true 00:05:00.477 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:00.477 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:01.851 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:01.851 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:01.851 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:02.108 true 00:05:02.108 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:02.108 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:02.366 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:02.624 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:02.624 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:02.882 true 00:05:02.882 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:02.882 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:03.140 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:03.397 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:03.397 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:03.655 true 00:05:03.655 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:03.655 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:05.028 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:05.028 
11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:05.028 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:05.286 true 00:05:05.286 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:05.286 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:05.544 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:05.801 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:05.801 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:06.059 true 00:05:06.059 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:06.059 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.359 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.640 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:06.640 11:07:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:06.898 true 00:05:06.898 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:06.898 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:07.832 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.090 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:08.090 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:08.348 true 00:05:08.348 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:08.348 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.606 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.864 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1012 00:05:08.864 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:09.121 true 00:05:09.121 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:09.121 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.378 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.636 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:09.636 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:09.893 true 00:05:10.152 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:10.152 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.086 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.344 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:11.344 11:07:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:11.602 true 00:05:11.602 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:11.602 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.860 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.118 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:12.118 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:12.375 true 00:05:12.375 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:12.375 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.631 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.889 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:12.889 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:13.147 true 00:05:13.147 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:13.147 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.521 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.521 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:14.521 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:14.780 true 00:05:14.780 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:14.780 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.039 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.296 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:15.296 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:15.554 true 00:05:15.554 11:07:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:15.554 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.812 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.069 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:16.069 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:16.328 true 00:05:16.328 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:16.328 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.261 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.518 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:17.518 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:17.775 true 00:05:17.775 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:17.775 11:07:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.033 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.291 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:18.291 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:18.549 true 00:05:18.549 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:18.549 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.807 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.065 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:19.065 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:19.323 true 00:05:19.323 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:19.323 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.257 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.774 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:20.774 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:21.031 true 00:05:21.031 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701 00:05:21.031 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.289 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.547 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:21.547 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:21.805 true 00:05:21.805 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2484701
00:05:21.805 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:22.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:22.738 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:22.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:22.996 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:05:22.996 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:05:23.254 true
00:05:23.254 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701
00:05:23.254 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:23.512 11:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:23.769 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:05:23.770 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:05:24.027 true
00:05:24.027 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701
00:05:24.027 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:24.285 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:24.543 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:24.543 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:05:24.801 true
00:05:25.060 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701
00:05:25.060 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:26.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:26.034 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:26.291 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:26.291 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:26.291 Initializing NVMe Controllers
00:05:26.291 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:26.291 Controller IO queue size 128, less than required.
00:05:26.291 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:26.291 Controller IO queue size 128, less than required.
00:05:26.291 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:26.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:26.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:26.291 Initialization complete. Launching workers.
00:05:26.291 ========================================================
00:05:26.291 Latency(us)
00:05:26.291 Device Information : IOPS MiB/s Average min max
00:05:26.291 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 628.95 0.31 83987.57 2471.57 1013023.09
00:05:26.291 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8772.05 4.28 14592.86 3063.30 540690.35
00:05:26.291 ========================================================
00:05:26.291 Total : 9401.00 4.59 19235.55 2471.57 1013023.09
00:05:26.549 true
00:05:26.549 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2484701
00:05:26.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2484701) - No such process
00:05:26.549 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2484701
00:05:26.549 11:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:26.806 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:27.064 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:27.064 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:27.064 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:27.064 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:27.064 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:27.321 null0
00:05:27.321 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:27.321 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:27.321 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:27.578 null1
00:05:27.578 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:27.578 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:27.578 11:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:27.835 null2
00:05:27.835 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:27.835 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:27.835 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:05:28.092 null3
00:05:28.092 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:28.092 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:28.092 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:05:28.349 null4
00:05:28.349 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:28.349 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:28.349 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:05:28.607 null5
00:05:28.607 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:28.607 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:28.607 11:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:05:28.864 null6
00:05:28.864 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:28.864 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:28.864 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:05:29.122 null7
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.122 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2488764 2488765 2488767 2488769 2488771 2488773 2488775 2488777
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.123 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:29.382 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:29.382 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:29.382 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:29.382 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:29.382 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:29.382 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:29.382 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:29.382 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.640 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:29.899 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:29.899 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:29.899 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:29.899 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:29.899 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:29.899 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:30.158 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:30.158 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.417 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:30.675 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:30.675 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:30.675 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:30.675 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:30.675 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:30.675 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:30.675 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:30.675 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:30.933 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.933 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.934 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:31.192 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:31.192 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:31.192 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:31.192 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:31.192 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:31.192 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:31.192 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:31.192 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5
nqn.2016-06.io.spdk:cnode1 null4 00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.450 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.451 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:31.451 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.451 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.451 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:31.451 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.451 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.451 11:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:31.709 11:07:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:31.709 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:31.709 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:31.709 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:31.709 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:31.709 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.709 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:31.709 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:32.275 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.275 11:07:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.275 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:32.275 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.275 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.276 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:32.533 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:32.533 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:32.533 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:32.533 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:32.533 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:32.534 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:32.534 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:32.534 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:32.791 11:07:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.791 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:33.048 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:33.048 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:33.048 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:33.048 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:33.048 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:33.048 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:33.048 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.048 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.306 
11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.306 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:33.580 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:33.580 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:33.580 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:33.580 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.580 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:33.580 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:33.580 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:33.580 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.846 11:07:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.846 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:34.105 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:34.105 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:34.105 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:34.105 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:34.105 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.105 11:07:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:34.105 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:34.105 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:34.362 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.620 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:34.877 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:34.877 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:34.877 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:34.877 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:34.877 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:34.877 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:34.877 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:34.877 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:35.135 rmmod nvme_tcp 00:05:35.135 rmmod nvme_fabrics 00:05:35.135 rmmod nvme_keyring 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2484280 ']' 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2484280 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 2484280 ']' 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2484280 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2484280 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2484280' 00:05:35.135 killing process with pid 2484280 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2484280 00:05:35.135 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2484280 00:05:35.395 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:35.395 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:35.395 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:35.395 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:35.395 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:35.395 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:35.395 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:05:35.395 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:35.395 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:35.395 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:35.395 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:35.395 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:37.935 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:37.935 00:05:37.935 real 0m47.467s 00:05:37.935 user 3m38.422s 00:05:37.935 sys 0m16.584s 00:05:37.935 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.935 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:37.935 ************************************ 00:05:37.935 END TEST nvmf_ns_hotplug_stress 00:05:37.935 ************************************ 00:05:37.935 11:07:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:37.935 11:07:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:37.935 11:07:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.935 11:07:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:37.935 ************************************ 00:05:37.935 START TEST nvmf_delete_subsystem 00:05:37.935 ************************************ 00:05:37.935 
11:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:37.935 * Looking for test storage... 00:05:37.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:37.935 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.935 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.935 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.935 11:07:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.935 11:07:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.935 --rc genhtml_branch_coverage=1 00:05:37.935 --rc genhtml_function_coverage=1 00:05:37.935 --rc genhtml_legend=1 00:05:37.935 --rc geninfo_all_blocks=1 00:05:37.935 --rc geninfo_unexecuted_blocks=1 00:05:37.935 00:05:37.935 ' 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.935 --rc genhtml_branch_coverage=1 00:05:37.935 --rc genhtml_function_coverage=1 00:05:37.935 --rc genhtml_legend=1 00:05:37.935 --rc geninfo_all_blocks=1 00:05:37.935 --rc geninfo_unexecuted_blocks=1 00:05:37.935 00:05:37.935 ' 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.935 --rc genhtml_branch_coverage=1 00:05:37.935 --rc genhtml_function_coverage=1 00:05:37.935 --rc genhtml_legend=1 00:05:37.935 --rc geninfo_all_blocks=1 00:05:37.935 --rc geninfo_unexecuted_blocks=1 00:05:37.935 00:05:37.935 ' 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.935 --rc genhtml_branch_coverage=1 00:05:37.935 --rc genhtml_function_coverage=1 00:05:37.935 --rc genhtml_legend=1 00:05:37.935 --rc geninfo_all_blocks=1 00:05:37.935 --rc geninfo_unexecuted_blocks=1 00:05:37.935 00:05:37.935 ' 
00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.935 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.936 11:07:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:37.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:37.936 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:40.469 11:07:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:40.469 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:05:40.470 Found 0000:82:00.0 (0x8086 - 0x159b) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:05:40.470 Found 0000:82:00.1 (0x8086 - 0x159b) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:05:40.470 Found net devices under 0000:82:00.0: cvl_0_0 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:82:00.1: cvl_0_1' 00:05:40.470 Found net devices under 0000:82:00.1: cvl_0_1 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:40.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:40.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:05:40.470 00:05:40.470 --- 10.0.0.2 ping statistics --- 00:05:40.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:40.470 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:40.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:40.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:05:40.470 00:05:40.470 --- 10.0.0.1 ping statistics --- 00:05:40.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:40.470 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:40.470 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:40.471 11:07:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2491958 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2491958 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2491958 ']' 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.471 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.471 [2024-11-19 11:07:35.944136] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:05:40.471 [2024-11-19 11:07:35.944202] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:40.728 [2024-11-19 11:07:36.026702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.728 [2024-11-19 11:07:36.084319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:40.728 [2024-11-19 11:07:36.084422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:40.728 [2024-11-19 11:07:36.084452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:40.728 [2024-11-19 11:07:36.084464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:40.728 [2024-11-19 11:07:36.084473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:40.728 [2024-11-19 11:07:36.086053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.728 [2024-11-19 11:07:36.086058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.728 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.728 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:40.728 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:40.728 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.728 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.728 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:40.728 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:40.728 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.728 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.986 [2024-11-19 11:07:36.229845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.986 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.986 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:40.986 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.986 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:05:40.986 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.986 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:40.986 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.986 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.986 [2024-11-19 11:07:36.246089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.987 NULL1 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.987 Delay0 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.987 11:07:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2491993 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:40.987 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:40.987 [2024-11-19 11:07:36.330860] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:05:42.885 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:42.885 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.885 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 
00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 [2024-11-19 11:07:38.543392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f4a0 is same with the state(6) to be set 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 
Read completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with 
error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 starting I/O failed: -6 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Read completed with error (sct=0, sc=8) 00:05:43.143 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 starting I/O failed: -6 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 starting I/O failed: -6 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 
00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 starting I/O failed: -6 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 starting I/O failed: -6 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 starting I/O failed: -6 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 [2024-11-19 11:07:38.544223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f292c00d4b0 is same with the state(6) to be set 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 
00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Read completed with error (sct=0, sc=8) 00:05:43.144 Write completed with error (sct=0, sc=8) 00:05:44.074 [2024-11-19 11:07:39.509182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19109a0 is same with the state(6) to be set 00:05:44.074 
Write completed with error (sct=0, sc=8) 00:05:44.074 Write completed with error (sct=0, sc=8) 00:05:44.074 Write completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Write completed with error (sct=0, sc=8) 00:05:44.074 Write completed with error (sct=0, sc=8) 00:05:44.074 Write completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Write completed with error (sct=0, sc=8) 00:05:44.074 Write completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Write completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Write completed with error (sct=0, sc=8) 00:05:44.074 Write completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 [2024-11-19 11:07:39.543686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f292c00d7e0 is same with the state(6) to be set 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Write completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 Read 
completed with error (sct=0, sc=8) 00:05:44.074 Write completed with error (sct=0, sc=8) 00:05:44.074 Read completed with error (sct=0, sc=8) 00:05:44.074 [2024-11-19 11:07:39.544021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f292c00d020 is same with the state(6) to be set 00:05:44.074 [2024-11-19 11:07:39.545382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f680 is same with the state(6) to be set 00:05:44.074 [2024-11-19 11:07:39.546044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f2c0 is same with the state(6) to be set 00:05:44.074 Initializing NVMe Controllers 00:05:44.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:44.074 Controller IO queue size 128, less than required. 00:05:44.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:44.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:44.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:44.074 Initialization complete. Launching workers. 
00:05:44.074 ======================================================== 00:05:44.074 Latency(us) 00:05:44.074 Device Information : IOPS MiB/s Average min max 00:05:44.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.13 0.08 893095.47 757.57 1012773.75 00:05:44.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.26 0.08 1018512.43 353.83 2001998.54 00:05:44.075 ======================================================== 00:05:44.075 Total : 328.38 0.16 952772.73 353.83 2001998.54 00:05:44.075 00:05:44.075 [2024-11-19 11:07:39.546509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19109a0 (9): Bad file descriptor 00:05:44.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:05:44.075 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.075 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:44.075 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2491993 00:05:44.075 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2491993 00:05:44.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2491993) - No such process 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2491993 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:05:44.640 11:07:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2491993 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2491993 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:44.640 
11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.640 [2024-11-19 11:07:40.070169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2492475 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2492475 00:05:44.640 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:44.898 [2024-11-19 11:07:40.143269] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:45.155 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:45.155 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2492475 00:05:45.155 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:45.719 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:45.719 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2492475 00:05:45.719 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:46.331 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:46.331 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2492475 00:05:46.331 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:46.896 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:46.896 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2492475 00:05:46.896 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:47.153 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:47.153 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2492475 00:05:47.153 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:47.716 11:07:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:47.716 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2492475 00:05:47.716 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:47.973 Initializing NVMe Controllers 00:05:47.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:47.973 Controller IO queue size 128, less than required. 00:05:47.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:47.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:47.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:47.973 Initialization complete. Launching workers. 00:05:47.973 ======================================================== 00:05:47.973 Latency(us) 00:05:47.973 Device Information : IOPS MiB/s Average min max 00:05:47.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003683.27 1000178.89 1011773.80 00:05:47.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004449.72 1000222.92 1041418.11 00:05:47.973 ======================================================== 00:05:47.973 Total : 256.00 0.12 1004066.50 1000178.89 1041418.11 00:05:47.973 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2492475 00:05:48.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2492475) - No such process 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 2492475 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:48.231 rmmod nvme_tcp 00:05:48.231 rmmod nvme_fabrics 00:05:48.231 rmmod nvme_keyring 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2491958 ']' 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2491958 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2491958 ']' 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2491958 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:05:48.231 11:07:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2491958 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2491958' 00:05:48.231 killing process with pid 2491958 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2491958 00:05:48.231 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2491958 00:05:48.489 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:48.489 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:48.489 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:48.489 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:05:48.489 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:05:48.489 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:48.489 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:05:48.489 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:48.489 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:05:48.489 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:48.489 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:48.489 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.024 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:51.024 00:05:51.024 real 0m13.041s 00:05:51.024 user 0m28.422s 00:05:51.024 sys 0m3.331s 00:05:51.024 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.024 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:51.024 ************************************ 00:05:51.024 END TEST nvmf_delete_subsystem 00:05:51.024 ************************************ 00:05:51.024 11:07:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:51.024 11:07:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:51.024 11:07:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.024 11:07:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:51.024 ************************************ 00:05:51.024 START TEST nvmf_host_management 00:05:51.024 ************************************ 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:51.024 * Looking for test storage... 
00:05:51.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:05:51.024 11:07:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.024 11:07:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.024 --rc genhtml_branch_coverage=1 00:05:51.024 --rc genhtml_function_coverage=1 00:05:51.024 --rc genhtml_legend=1 00:05:51.024 --rc geninfo_all_blocks=1 00:05:51.024 --rc geninfo_unexecuted_blocks=1 00:05:51.024 00:05:51.024 ' 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.024 --rc genhtml_branch_coverage=1 00:05:51.024 --rc genhtml_function_coverage=1 00:05:51.024 --rc genhtml_legend=1 00:05:51.024 --rc geninfo_all_blocks=1 00:05:51.024 --rc geninfo_unexecuted_blocks=1 00:05:51.024 00:05:51.024 ' 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.024 --rc genhtml_branch_coverage=1 00:05:51.024 --rc genhtml_function_coverage=1 00:05:51.024 --rc genhtml_legend=1 00:05:51.024 --rc geninfo_all_blocks=1 00:05:51.024 --rc geninfo_unexecuted_blocks=1 00:05:51.024 00:05:51.024 ' 00:05:51.024 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.025 --rc genhtml_branch_coverage=1 00:05:51.025 --rc genhtml_function_coverage=1 00:05:51.025 --rc genhtml_legend=1 00:05:51.025 --rc geninfo_all_blocks=1 00:05:51.025 --rc geninfo_unexecuted_blocks=1 00:05:51.025 00:05:51.025 ' 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:05:51.025 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:05:53.560 11:07:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:53.560 11:07:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:05:53.560 Found 0000:82:00.0 (0x8086 - 0x159b) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:05:53.560 Found 0000:82:00.1 (0x8086 - 0x159b) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:53.560 11:07:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:05:53.560 Found net devices under 0000:82:00.0: cvl_0_0 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:05:53.560 Found net devices under 0000:82:00.1: cvl_0_1 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:53.560 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:53.561 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:53.561 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:53.561 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:53.561 11:07:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:53.561 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:53.561 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:53.561 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:53.561 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:53.561 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:53.561 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:53.561 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:53.561 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:53.561 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:53.561 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:53.561 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:53.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:53.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:05:53.819 00:05:53.819 --- 10.0.0.2 ping statistics --- 00:05:53.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.819 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:53.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:53.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:05:53.819 00:05:53.819 --- 10.0.0.1 ping statistics --- 00:05:53.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.819 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:05:53.819 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2495174 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2495174 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2495174 ']' 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.820 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:53.820 [2024-11-19 11:07:49.153754] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:05:53.820 [2024-11-19 11:07:49.153829] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.820 [2024-11-19 11:07:49.240064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.820 [2024-11-19 11:07:49.297153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:53.820 [2024-11-19 11:07:49.297212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:53.820 [2024-11-19 11:07:49.297240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.820 [2024-11-19 11:07:49.297252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.820 [2024-11-19 11:07:49.297261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:53.820 [2024-11-19 11:07:49.298902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.820 [2024-11-19 11:07:49.298948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.820 [2024-11-19 11:07:49.299008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:53.820 [2024-11-19 11:07:49.299010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:54.078 [2024-11-19 11:07:49.447974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:05:54.078 11:07:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:54.078 Malloc0 00:05:54.078 [2024-11-19 11:07:49.531789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2495336 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2495336 /var/tmp/bdevperf.sock 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2495336 ']' 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:05:54.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:05:54.078 { 00:05:54.078 "params": { 00:05:54.078 "name": "Nvme$subsystem", 00:05:54.078 "trtype": "$TEST_TRANSPORT", 00:05:54.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:05:54.078 "adrfam": "ipv4", 00:05:54.078 "trsvcid": "$NVMF_PORT", 00:05:54.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:05:54.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:05:54.078 "hdgst": ${hdgst:-false}, 
00:05:54.078 "ddgst": ${ddgst:-false} 00:05:54.078 }, 00:05:54.078 "method": "bdev_nvme_attach_controller" 00:05:54.078 } 00:05:54.078 EOF 00:05:54.078 )") 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:05:54.078 11:07:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:05:54.078 "params": { 00:05:54.078 "name": "Nvme0", 00:05:54.078 "trtype": "tcp", 00:05:54.078 "traddr": "10.0.0.2", 00:05:54.078 "adrfam": "ipv4", 00:05:54.078 "trsvcid": "4420", 00:05:54.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:54.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:05:54.078 "hdgst": false, 00:05:54.078 "ddgst": false 00:05:54.078 }, 00:05:54.078 "method": "bdev_nvme_attach_controller" 00:05:54.078 }' 00:05:54.336 [2024-11-19 11:07:49.614693] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:05:54.336 [2024-11-19 11:07:49.614765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495336 ] 00:05:54.336 [2024-11-19 11:07:49.693237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.336 [2024-11-19 11:07:49.752400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.902 Running I/O for 10 seconds... 
00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:05:54.902 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=545 00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 545 -ge 100 ']'
00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:55.162 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:05:55.162 [2024-11-19 11:07:50.506976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10cff10 is same with the state(6) to be set
00:05:55.162 [... previous message repeated for tqpair=0x10cff10, timestamps 11:07:50.507125 through 11:07:50.507905 ...]
00:05:55.163 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:55.163 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:05:55.163 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:55.163 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:05:55.163 [2024-11-19 11:07:50.512300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:05:55.163 [2024-11-19 11:07:50.512344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:05:55.163 [... READ command / ABORTED - SQ DELETION (00/08) completion pairs repeated for cid:24-63, lba:76800-81792, len:128, timestamps 11:07:50.512380 through 11:07:50.513644 ...]
00:05:55.165 [... WRITE command / ABORTED - SQ DELETION (00/08) completion pairs repeated for cid:0-22, lba:81920-84736, len:128, timestamps 11:07:50.513694 through 11:07:50.514425 ...]
00:05:55.165 [... ASYNC EVENT REQUEST (0c) admin commands qid:0 cid:0-3 each ABORTED - SQ DELETION (00/08), timestamps 11:07:50.514582 through 11:07:50.514706 ...]
00:05:55.166 [2024-11-19 11:07:50.514719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ca40 is same with the state(6) to be set
00:05:55.166 [2024-11-19 11:07:50.516030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:05:55.166 task offset: 76672 on job bdev=Nvme0n1 fails
00:05:55.166
00:05:55.166 Latency(us)
00:05:55.166 [2024-11-19T10:07:50.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:05:55.166 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:05:55.166 Job: Nvme0n1 ended in about 0.41 seconds with error
00:05:55.166 Verification LBA range: start 0x0 length 0x400
00:05:55.166 Nvme0n1 : 0.41 1457.70 91.11 155.75 0.00 38564.20 2815.62 33981.63
00:05:55.166 [2024-11-19T10:07:50.663Z] ===================================================================================================================
00:05:55.166 [2024-11-19T10:07:50.663Z] Total : 1457.70 91.11 155.75 0.00 38564.20 2815.62 33981.63
00:05:55.166 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:55.166 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:05:55.166 [2024-11-19 11:07:50.519279] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:55.166 [2024-11-19 11:07:50.519312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229ca40 (9): Bad file descriptor
00:05:55.166 [2024-11-19 11:07:50.652515] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:05:56.099 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2495336 00:05:56.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2495336) - No such process 00:05:56.099 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:05:56.099 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:05:56.099 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:05:56.099 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:05:56.099 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:05:56.099 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:05:56.099 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:05:56.099 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:05:56.099 { 00:05:56.099 "params": { 00:05:56.099 "name": "Nvme$subsystem", 00:05:56.099 "trtype": "$TEST_TRANSPORT", 00:05:56.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:05:56.099 "adrfam": "ipv4", 00:05:56.099 "trsvcid": "$NVMF_PORT", 00:05:56.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:05:56.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:05:56.099 "hdgst": ${hdgst:-false}, 00:05:56.099 "ddgst": ${ddgst:-false} 00:05:56.099 }, 00:05:56.099 "method": "bdev_nvme_attach_controller" 00:05:56.099 } 00:05:56.099 EOF 00:05:56.099 )") 00:05:56.099 
11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:05:56.099 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:05:56.099 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:05:56.099 11:07:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:05:56.099 "params": { 00:05:56.099 "name": "Nvme0", 00:05:56.099 "trtype": "tcp", 00:05:56.099 "traddr": "10.0.0.2", 00:05:56.099 "adrfam": "ipv4", 00:05:56.099 "trsvcid": "4420", 00:05:56.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:56.099 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:05:56.099 "hdgst": false, 00:05:56.099 "ddgst": false 00:05:56.099 }, 00:05:56.099 "method": "bdev_nvme_attach_controller" 00:05:56.099 }' 00:05:56.099 [2024-11-19 11:07:51.571951] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:05:56.099 [2024-11-19 11:07:51.572025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495616 ] 00:05:56.359 [2024-11-19 11:07:51.651082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.360 [2024-11-19 11:07:51.711072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.618 Running I/O for 1 seconds... 
00:05:57.550 1536.00 IOPS, 96.00 MiB/s 00:05:57.550 Latency(us) 00:05:57.550 [2024-11-19T10:07:53.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:57.550 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:05:57.550 Verification LBA range: start 0x0 length 0x400 00:05:57.550 Nvme0n1 : 1.01 1587.17 99.20 0.00 0.00 39679.64 6359.42 33981.63 00:05:57.550 [2024-11-19T10:07:53.047Z] =================================================================================================================== 00:05:57.550 [2024-11-19T10:07:53.047Z] Total : 1587.17 99.20 0.00 0.00 39679.64 6359.42 33981.63 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:57.807 11:07:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:57.807 rmmod nvme_tcp 00:05:57.807 rmmod nvme_fabrics 00:05:57.807 rmmod nvme_keyring 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2495174 ']' 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2495174 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2495174 ']' 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2495174 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2495174 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2495174' 00:05:57.807 killing process with pid 2495174 00:05:57.807 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2495174 00:05:57.807 11:07:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2495174 00:05:58.065 [2024-11-19 11:07:53.503333] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:05:58.065 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:58.065 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:58.065 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:58.065 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:05:58.065 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:05:58.065 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:58.065 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:05:58.065 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:58.065 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:58.065 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:58.065 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:58.065 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:00.600 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:00.600 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:00.600 00:06:00.600 real 0m9.562s 00:06:00.600 user 0m20.270s 
00:06:00.600 sys 0m3.384s 00:06:00.600 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.600 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:00.600 ************************************ 00:06:00.600 END TEST nvmf_host_management 00:06:00.600 ************************************ 00:06:00.600 11:07:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:00.600 11:07:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:00.600 11:07:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.600 11:07:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:00.600 ************************************ 00:06:00.600 START TEST nvmf_lvol 00:06:00.600 ************************************ 00:06:00.600 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:00.600 * Looking for test storage... 
00:06:00.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.601 11:07:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.601 --rc genhtml_branch_coverage=1 00:06:00.601 --rc genhtml_function_coverage=1 00:06:00.601 --rc genhtml_legend=1 00:06:00.601 --rc geninfo_all_blocks=1 00:06:00.601 --rc geninfo_unexecuted_blocks=1 
00:06:00.601 00:06:00.601 ' 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.601 --rc genhtml_branch_coverage=1 00:06:00.601 --rc genhtml_function_coverage=1 00:06:00.601 --rc genhtml_legend=1 00:06:00.601 --rc geninfo_all_blocks=1 00:06:00.601 --rc geninfo_unexecuted_blocks=1 00:06:00.601 00:06:00.601 ' 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.601 --rc genhtml_branch_coverage=1 00:06:00.601 --rc genhtml_function_coverage=1 00:06:00.601 --rc genhtml_legend=1 00:06:00.601 --rc geninfo_all_blocks=1 00:06:00.601 --rc geninfo_unexecuted_blocks=1 00:06:00.601 00:06:00.601 ' 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.601 --rc genhtml_branch_coverage=1 00:06:00.601 --rc genhtml_function_coverage=1 00:06:00.601 --rc genhtml_legend=1 00:06:00.601 --rc geninfo_all_blocks=1 00:06:00.601 --rc geninfo_unexecuted_blocks=1 00:06:00.601 00:06:00.601 ' 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.601 11:07:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.601 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:00.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:00.602 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:03.137 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:06:03.138 Found 0000:82:00.0 (0x8086 - 0x159b) 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:06:03.138 Found 0000:82:00.1 (0x8086 - 0x159b) 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:03.138 
11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:06:03.138 Found net devices under 0000:82:00.0: cvl_0_0 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:03.138 11:07:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:06:03.138 Found net devices under 0000:82:00.1: cvl_0_1 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:03.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:03.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:06:03.138 00:06:03.138 --- 10.0.0.2 ping statistics --- 00:06:03.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.138 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:03.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:03.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:06:03.138 00:06:03.138 --- 10.0.0.1 ping statistics --- 00:06:03.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.138 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2498126 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2498126 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2498126 ']' 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.138 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:03.138 [2024-11-19 11:07:58.569201] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:06:03.138 [2024-11-19 11:07:58.569290] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.397 [2024-11-19 11:07:58.649841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.397 [2024-11-19 11:07:58.704711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:03.397 [2024-11-19 11:07:58.704766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:03.397 [2024-11-19 11:07:58.704795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.397 [2024-11-19 11:07:58.704806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:03.397 [2024-11-19 11:07:58.704816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:03.397 [2024-11-19 11:07:58.706289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.397 [2024-11-19 11:07:58.706449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.397 [2024-11-19 11:07:58.706454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.397 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.397 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:03.397 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:03.397 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.397 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:03.397 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:03.397 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:03.655 [2024-11-19 11:07:59.099844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.655 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:04.220 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:04.220 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:04.220 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:04.220 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:04.785 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:05.044 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=696930e4-bb6a-4f31-b198-abcd43ff7344 00:06:05.044 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 696930e4-bb6a-4f31-b198-abcd43ff7344 lvol 20 00:06:05.304 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ffc4f0ea-2209-4139-922a-783bd9a5a6b8 00:06:05.304 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:05.562 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ffc4f0ea-2209-4139-922a-783bd9a5a6b8 00:06:05.819 11:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:06.077 [2024-11-19 11:08:01.352743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:06.077 11:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:06.334 11:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2498547 00:06:06.334 11:08:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:06.334 11:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:07.266 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ffc4f0ea-2209-4139-922a-783bd9a5a6b8 MY_SNAPSHOT 00:06:07.524 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0b0a39f0-4943-458e-999d-284b182c2880 00:06:07.524 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ffc4f0ea-2209-4139-922a-783bd9a5a6b8 30 00:06:08.091 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0b0a39f0-4943-458e-999d-284b182c2880 MY_CLONE 00:06:08.350 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=477318ae-02ec-47bb-b45b-cc0f113b97f8 00:06:08.350 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 477318ae-02ec-47bb-b45b-cc0f113b97f8 00:06:09.349 11:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2498547 00:06:17.490 Initializing NVMe Controllers 00:06:17.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:17.490 Controller IO queue size 128, less than required. 00:06:17.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:17.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:17.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:17.490 Initialization complete. Launching workers. 00:06:17.490 ======================================================== 00:06:17.490 Latency(us) 00:06:17.490 Device Information : IOPS MiB/s Average min max 00:06:17.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10310.40 40.27 12413.96 1934.25 60922.39 00:06:17.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10219.70 39.92 12526.97 1992.98 56585.73 00:06:17.490 ======================================================== 00:06:17.490 Total : 20530.10 80.20 12470.21 1934.25 60922.39 00:06:17.490 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ffc4f0ea-2209-4139-922a-783bd9a5a6b8 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 696930e4-bb6a-4f31-b198-abcd43ff7344 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:17.490 rmmod nvme_tcp 00:06:17.490 rmmod nvme_fabrics 00:06:17.490 rmmod nvme_keyring 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2498126 ']' 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2498126 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2498126 ']' 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2498126 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.490 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498126 00:06:17.749 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.749 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.749 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498126' 00:06:17.749 killing process with pid 2498126 00:06:17.749 11:08:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2498126 00:06:17.749 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2498126 00:06:18.008 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:18.008 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:18.008 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:18.008 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:18.008 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:18.008 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:18.008 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:18.008 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:18.008 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:18.008 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.008 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.008 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.918 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:19.918 00:06:19.918 real 0m19.680s 00:06:19.918 user 1m5.977s 00:06:19.918 sys 0m5.835s 00:06:19.918 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.918 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:19.918 ************************************ 00:06:19.918 END TEST 
nvmf_lvol 00:06:19.918 ************************************ 00:06:19.918 11:08:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:19.918 11:08:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.918 11:08:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.918 11:08:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.918 ************************************ 00:06:19.918 START TEST nvmf_lvs_grow 00:06:19.918 ************************************ 00:06:19.918 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:20.179 * Looking for test storage... 00:06:20.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.179 11:08:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.179 --rc genhtml_branch_coverage=1 00:06:20.179 --rc genhtml_function_coverage=1 00:06:20.179 --rc genhtml_legend=1 00:06:20.179 --rc geninfo_all_blocks=1 00:06:20.179 --rc geninfo_unexecuted_blocks=1 00:06:20.179 00:06:20.179 ' 
00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.179 --rc genhtml_branch_coverage=1 00:06:20.179 --rc genhtml_function_coverage=1 00:06:20.179 --rc genhtml_legend=1 00:06:20.179 --rc geninfo_all_blocks=1 00:06:20.179 --rc geninfo_unexecuted_blocks=1 00:06:20.179 00:06:20.179 ' 00:06:20.179 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.180 --rc genhtml_branch_coverage=1 00:06:20.180 --rc genhtml_function_coverage=1 00:06:20.180 --rc genhtml_legend=1 00:06:20.180 --rc geninfo_all_blocks=1 00:06:20.180 --rc geninfo_unexecuted_blocks=1 00:06:20.180 00:06:20.180 ' 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.180 --rc genhtml_branch_coverage=1 00:06:20.180 --rc genhtml_function_coverage=1 00:06:20.180 --rc genhtml_legend=1 00:06:20.180 --rc geninfo_all_blocks=1 00:06:20.180 --rc geninfo_unexecuted_blocks=1 00:06:20.180 00:06:20.180 ' 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.180 11:08:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.180 
11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.180 11:08:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.180 
11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:20.180 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:20.181 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:20.181 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:06:22.718 Found 0000:82:00.0 (0x8086 - 0x159b) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:06:22.718 Found 0000:82:00.1 (0x8086 - 0x159b) 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.718 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:22.718 
11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:06:22.719 Found net devices under 0000:82:00.0: cvl_0_0 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:06:22.719 Found net devices under 0000:82:00.1: cvl_0_1 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.719 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:22.978 11:08:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:22.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:22.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:06:22.978 00:06:22.978 --- 10.0.0.2 ping statistics --- 00:06:22.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.978 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:22.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:06:22.978 00:06:22.978 --- 10.0.0.1 ping statistics --- 00:06:22.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.978 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2502132 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2502132 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2502132 ']' 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.978 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:22.978 [2024-11-19 11:08:18.382186] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:06:22.978 [2024-11-19 11:08:18.382261] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.978 [2024-11-19 11:08:18.465121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.237 [2024-11-19 11:08:18.523907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.237 [2024-11-19 11:08:18.523974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.237 [2024-11-19 11:08:18.524002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.237 [2024-11-19 11:08:18.524014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.237 [2024-11-19 11:08:18.524023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:23.237 [2024-11-19 11:08:18.524716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.237 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.237 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:23.237 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:23.237 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.237 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:23.237 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.237 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:23.495 [2024-11-19 11:08:18.909106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:23.495 ************************************ 00:06:23.495 START TEST lvs_grow_clean 00:06:23.495 ************************************ 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:23.495 11:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:23.753 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:23.753 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:24.319 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=35d8c9b4-a6a0-4d28-b48c-382c4ff13265 00:06:24.319 11:08:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35d8c9b4-a6a0-4d28-b48c-382c4ff13265 00:06:24.319 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:24.319 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:24.319 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:24.319 11:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 35d8c9b4-a6a0-4d28-b48c-382c4ff13265 lvol 150 00:06:24.577 11:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=85336be6-d556-4e32-8da8-34ae7a76f8a1 00:06:24.577 11:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:24.577 11:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:24.835 [2024-11-19 11:08:20.328839] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:24.835 [2024-11-19 11:08:20.328961] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:25.093 true 00:06:25.093 11:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35d8c9b4-a6a0-4d28-b48c-382c4ff13265 00:06:25.093 11:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:25.351 11:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:25.351 11:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:25.610 11:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 85336be6-d556-4e32-8da8-34ae7a76f8a1 00:06:25.868 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:26.188 [2024-11-19 11:08:21.420132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.188 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:26.462 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2502572 00:06:26.462 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:26.462 11:08:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:26.462 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2502572 /var/tmp/bdevperf.sock 00:06:26.462 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2502572 ']' 00:06:26.462 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:26.462 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.462 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:26.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:26.462 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.462 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:26.462 [2024-11-19 11:08:21.742574] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:06:26.462 [2024-11-19 11:08:21.742656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502572 ] 00:06:26.462 [2024-11-19 11:08:21.821229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.462 [2024-11-19 11:08:21.877896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.721 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.721 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:26.721 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:26.978 Nvme0n1 00:06:27.236 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:27.236 [ 00:06:27.236 { 00:06:27.236 "name": "Nvme0n1", 00:06:27.236 "aliases": [ 00:06:27.236 "85336be6-d556-4e32-8da8-34ae7a76f8a1" 00:06:27.236 ], 00:06:27.236 "product_name": "NVMe disk", 00:06:27.236 "block_size": 4096, 00:06:27.236 "num_blocks": 38912, 00:06:27.236 "uuid": "85336be6-d556-4e32-8da8-34ae7a76f8a1", 00:06:27.236 "numa_id": 1, 00:06:27.236 "assigned_rate_limits": { 00:06:27.236 "rw_ios_per_sec": 0, 00:06:27.236 "rw_mbytes_per_sec": 0, 00:06:27.236 "r_mbytes_per_sec": 0, 00:06:27.236 "w_mbytes_per_sec": 0 00:06:27.236 }, 00:06:27.236 "claimed": false, 00:06:27.236 "zoned": false, 00:06:27.236 "supported_io_types": { 00:06:27.236 "read": true, 
00:06:27.236 "write": true, 00:06:27.236 "unmap": true, 00:06:27.236 "flush": true, 00:06:27.236 "reset": true, 00:06:27.236 "nvme_admin": true, 00:06:27.236 "nvme_io": true, 00:06:27.236 "nvme_io_md": false, 00:06:27.236 "write_zeroes": true, 00:06:27.236 "zcopy": false, 00:06:27.236 "get_zone_info": false, 00:06:27.236 "zone_management": false, 00:06:27.237 "zone_append": false, 00:06:27.237 "compare": true, 00:06:27.237 "compare_and_write": true, 00:06:27.237 "abort": true, 00:06:27.237 "seek_hole": false, 00:06:27.237 "seek_data": false, 00:06:27.237 "copy": true, 00:06:27.237 "nvme_iov_md": false 00:06:27.237 }, 00:06:27.237 "memory_domains": [ 00:06:27.237 { 00:06:27.237 "dma_device_id": "system", 00:06:27.237 "dma_device_type": 1 00:06:27.237 } 00:06:27.237 ], 00:06:27.237 "driver_specific": { 00:06:27.237 "nvme": [ 00:06:27.237 { 00:06:27.237 "trid": { 00:06:27.237 "trtype": "TCP", 00:06:27.237 "adrfam": "IPv4", 00:06:27.237 "traddr": "10.0.0.2", 00:06:27.237 "trsvcid": "4420", 00:06:27.237 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:27.237 }, 00:06:27.237 "ctrlr_data": { 00:06:27.237 "cntlid": 1, 00:06:27.237 "vendor_id": "0x8086", 00:06:27.237 "model_number": "SPDK bdev Controller", 00:06:27.237 "serial_number": "SPDK0", 00:06:27.237 "firmware_revision": "25.01", 00:06:27.237 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:27.237 "oacs": { 00:06:27.237 "security": 0, 00:06:27.237 "format": 0, 00:06:27.237 "firmware": 0, 00:06:27.237 "ns_manage": 0 00:06:27.237 }, 00:06:27.237 "multi_ctrlr": true, 00:06:27.237 "ana_reporting": false 00:06:27.237 }, 00:06:27.237 "vs": { 00:06:27.237 "nvme_version": "1.3" 00:06:27.237 }, 00:06:27.237 "ns_data": { 00:06:27.237 "id": 1, 00:06:27.237 "can_share": true 00:06:27.237 } 00:06:27.237 } 00:06:27.237 ], 00:06:27.237 "mp_policy": "active_passive" 00:06:27.237 } 00:06:27.237 } 00:06:27.237 ] 00:06:27.495 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2502705 00:06:27.495 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:27.495 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:27.495 Running I/O for 10 seconds... 00:06:28.429 Latency(us) 00:06:28.429 [2024-11-19T10:08:23.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:28.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:28.429 Nvme0n1 : 1.00 16325.00 63.77 0.00 0.00 0.00 0.00 0.00 00:06:28.429 [2024-11-19T10:08:23.926Z] =================================================================================================================== 00:06:28.429 [2024-11-19T10:08:23.926Z] Total : 16325.00 63.77 0.00 0.00 0.00 0.00 0.00 00:06:28.429 00:06:29.364 11:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 35d8c9b4-a6a0-4d28-b48c-382c4ff13265 00:06:29.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:29.364 Nvme0n1 : 2.00 16481.00 64.38 0.00 0.00 0.00 0.00 0.00 00:06:29.364 [2024-11-19T10:08:24.861Z] =================================================================================================================== 00:06:29.364 [2024-11-19T10:08:24.861Z] Total : 16481.00 64.38 0.00 0.00 0.00 0.00 0.00 00:06:29.364 00:06:29.623 true 00:06:29.623 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35d8c9b4-a6a0-4d28-b48c-382c4ff13265 00:06:29.623 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:29.881 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:29.881 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:29.881 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2502705 00:06:30.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:30.447 Nvme0n1 : 3.00 16575.33 64.75 0.00 0.00 0.00 0.00 0.00 00:06:30.447 [2024-11-19T10:08:25.944Z] =================================================================================================================== 00:06:30.447 [2024-11-19T10:08:25.944Z] Total : 16575.33 64.75 0.00 0.00 0.00 0.00 0.00 00:06:30.447 00:06:31.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:31.381 Nvme0n1 : 4.00 16654.25 65.06 0.00 0.00 0.00 0.00 0.00 00:06:31.381 [2024-11-19T10:08:26.878Z] =================================================================================================================== 00:06:31.381 [2024-11-19T10:08:26.878Z] Total : 16654.25 65.06 0.00 0.00 0.00 0.00 0.00 00:06:31.381 00:06:32.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:32.757 Nvme0n1 : 5.00 16714.60 65.29 0.00 0.00 0.00 0.00 0.00 00:06:32.757 [2024-11-19T10:08:28.254Z] =================================================================================================================== 00:06:32.757 [2024-11-19T10:08:28.254Z] Total : 16714.60 65.29 0.00 0.00 0.00 0.00 0.00 00:06:32.757 00:06:33.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:33.692 Nvme0n1 : 6.00 16765.17 65.49 0.00 0.00 0.00 0.00 0.00 00:06:33.692 [2024-11-19T10:08:29.189Z] =================================================================================================================== 00:06:33.692 
[2024-11-19T10:08:29.189Z] Total : 16765.17 65.49 0.00 0.00 0.00 0.00 0.00 00:06:33.692 00:06:34.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:34.625 Nvme0n1 : 7.00 16828.71 65.74 0.00 0.00 0.00 0.00 0.00 00:06:34.625 [2024-11-19T10:08:30.122Z] =================================================================================================================== 00:06:34.625 [2024-11-19T10:08:30.122Z] Total : 16828.71 65.74 0.00 0.00 0.00 0.00 0.00 00:06:34.625 00:06:35.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:35.559 Nvme0n1 : 8.00 16797.00 65.61 0.00 0.00 0.00 0.00 0.00 00:06:35.559 [2024-11-19T10:08:31.056Z] =================================================================================================================== 00:06:35.559 [2024-11-19T10:08:31.056Z] Total : 16797.00 65.61 0.00 0.00 0.00 0.00 0.00 00:06:35.559 00:06:36.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:36.494 Nvme0n1 : 9.00 16843.56 65.80 0.00 0.00 0.00 0.00 0.00 00:06:36.494 [2024-11-19T10:08:31.991Z] =================================================================================================================== 00:06:36.494 [2024-11-19T10:08:31.991Z] Total : 16843.56 65.80 0.00 0.00 0.00 0.00 0.00 00:06:36.494 00:06:37.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:37.429 Nvme0n1 : 10.00 16881.10 65.94 0.00 0.00 0.00 0.00 0.00 00:06:37.429 [2024-11-19T10:08:32.926Z] =================================================================================================================== 00:06:37.429 [2024-11-19T10:08:32.926Z] Total : 16881.10 65.94 0.00 0.00 0.00 0.00 0.00 00:06:37.429 00:06:37.429 00:06:37.429 Latency(us) 00:06:37.429 [2024-11-19T10:08:32.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:06:37.429 Nvme0n1 : 10.00 16887.60 65.97 0.00 0.00 7575.95 4417.61 15340.28 00:06:37.429 [2024-11-19T10:08:32.926Z] =================================================================================================================== 00:06:37.429 [2024-11-19T10:08:32.926Z] Total : 16887.60 65.97 0.00 0.00 7575.95 4417.61 15340.28 00:06:37.429 { 00:06:37.429 "results": [ 00:06:37.429 { 00:06:37.429 "job": "Nvme0n1", 00:06:37.429 "core_mask": "0x2", 00:06:37.429 "workload": "randwrite", 00:06:37.429 "status": "finished", 00:06:37.429 "queue_depth": 128, 00:06:37.429 "io_size": 4096, 00:06:37.429 "runtime": 10.003731, 00:06:37.429 "iops": 16887.599236724778, 00:06:37.429 "mibps": 65.96718451845616, 00:06:37.429 "io_failed": 0, 00:06:37.429 "io_timeout": 0, 00:06:37.429 "avg_latency_us": 7575.946559926408, 00:06:37.429 "min_latency_us": 4417.6118518518515, 00:06:37.429 "max_latency_us": 15340.278518518518 00:06:37.429 } 00:06:37.429 ], 00:06:37.429 "core_count": 1 00:06:37.429 } 00:06:37.429 11:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2502572 00:06:37.429 11:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2502572 ']' 00:06:37.429 11:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2502572 00:06:37.429 11:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:37.429 11:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.429 11:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2502572 00:06:37.687 11:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:37.687 11:08:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:37.687 11:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2502572' 00:06:37.687 killing process with pid 2502572 00:06:37.687 11:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2502572 00:06:37.687 Received shutdown signal, test time was about 10.000000 seconds 00:06:37.687 00:06:37.687 Latency(us) 00:06:37.687 [2024-11-19T10:08:33.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.687 [2024-11-19T10:08:33.184Z] =================================================================================================================== 00:06:37.687 [2024-11-19T10:08:33.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:37.687 11:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2502572 00:06:37.687 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:37.946 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:38.512 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35d8c9b4-a6a0-4d28-b48c-382c4ff13265 00:06:38.512 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:38.512 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:06:38.512 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:38.512 11:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:38.769 [2024-11-19 11:08:34.233228] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:38.770 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35d8c9b4-a6a0-4d28-b48c-382c4ff13265 00:06:38.770 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:38.770 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35d8c9b4-a6a0-4d28-b48c-382c4ff13265 00:06:38.770 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.770 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.770 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.770 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.770 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.770 
11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.770 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.770 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:38.770 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35d8c9b4-a6a0-4d28-b48c-382c4ff13265 00:06:39.028 request: 00:06:39.028 { 00:06:39.028 "uuid": "35d8c9b4-a6a0-4d28-b48c-382c4ff13265", 00:06:39.028 "method": "bdev_lvol_get_lvstores", 00:06:39.028 "req_id": 1 00:06:39.028 } 00:06:39.028 Got JSON-RPC error response 00:06:39.028 response: 00:06:39.028 { 00:06:39.028 "code": -19, 00:06:39.028 "message": "No such device" 00:06:39.028 } 00:06:39.286 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:06:39.286 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.286 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.286 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.286 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:39.544 aio_bdev 00:06:39.544 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 85336be6-d556-4e32-8da8-34ae7a76f8a1 00:06:39.544 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=85336be6-d556-4e32-8da8-34ae7a76f8a1 00:06:39.544 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:39.544 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:06:39.544 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:39.544 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:39.544 11:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:39.802 11:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 85336be6-d556-4e32-8da8-34ae7a76f8a1 -t 2000 00:06:40.060 [ 00:06:40.060 { 00:06:40.060 "name": "85336be6-d556-4e32-8da8-34ae7a76f8a1", 00:06:40.060 "aliases": [ 00:06:40.060 "lvs/lvol" 00:06:40.060 ], 00:06:40.060 "product_name": "Logical Volume", 00:06:40.060 "block_size": 4096, 00:06:40.060 "num_blocks": 38912, 00:06:40.060 "uuid": "85336be6-d556-4e32-8da8-34ae7a76f8a1", 00:06:40.060 "assigned_rate_limits": { 00:06:40.060 "rw_ios_per_sec": 0, 00:06:40.060 "rw_mbytes_per_sec": 0, 00:06:40.060 "r_mbytes_per_sec": 0, 00:06:40.060 "w_mbytes_per_sec": 0 00:06:40.060 }, 00:06:40.060 "claimed": false, 00:06:40.060 "zoned": false, 00:06:40.060 "supported_io_types": { 00:06:40.060 "read": true, 00:06:40.060 "write": true, 00:06:40.060 "unmap": true, 00:06:40.060 "flush": false, 00:06:40.060 "reset": true, 00:06:40.060 
"nvme_admin": false, 00:06:40.060 "nvme_io": false, 00:06:40.060 "nvme_io_md": false, 00:06:40.060 "write_zeroes": true, 00:06:40.060 "zcopy": false, 00:06:40.060 "get_zone_info": false, 00:06:40.060 "zone_management": false, 00:06:40.060 "zone_append": false, 00:06:40.060 "compare": false, 00:06:40.060 "compare_and_write": false, 00:06:40.060 "abort": false, 00:06:40.060 "seek_hole": true, 00:06:40.060 "seek_data": true, 00:06:40.060 "copy": false, 00:06:40.060 "nvme_iov_md": false 00:06:40.060 }, 00:06:40.060 "driver_specific": { 00:06:40.060 "lvol": { 00:06:40.060 "lvol_store_uuid": "35d8c9b4-a6a0-4d28-b48c-382c4ff13265", 00:06:40.060 "base_bdev": "aio_bdev", 00:06:40.060 "thin_provision": false, 00:06:40.060 "num_allocated_clusters": 38, 00:06:40.060 "snapshot": false, 00:06:40.060 "clone": false, 00:06:40.060 "esnap_clone": false 00:06:40.060 } 00:06:40.060 } 00:06:40.060 } 00:06:40.060 ] 00:06:40.060 11:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:06:40.060 11:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35d8c9b4-a6a0-4d28-b48c-382c4ff13265 00:06:40.060 11:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:40.318 11:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:40.318 11:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35d8c9b4-a6a0-4d28-b48c-382c4ff13265 00:06:40.318 11:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:40.576 11:08:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:40.576 11:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 85336be6-d556-4e32-8da8-34ae7a76f8a1 00:06:40.834 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 35d8c9b4-a6a0-4d28-b48c-382c4ff13265 00:06:41.093 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:41.351 00:06:41.351 real 0m17.782s 00:06:41.351 user 0m17.383s 00:06:41.351 sys 0m1.857s 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:41.351 ************************************ 00:06:41.351 END TEST lvs_grow_clean 00:06:41.351 ************************************ 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:41.351 ************************************ 
00:06:41.351 START TEST lvs_grow_dirty 00:06:41.351 ************************************ 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:41.351 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:41.609 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:41.609 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:41.868 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=487e7adf-b053-44f5-9c9e-c48db643150f 00:06:41.868 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:06:41.868 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:42.435 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:42.435 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:42.435 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 487e7adf-b053-44f5-9c9e-c48db643150f lvol 150 00:06:42.435 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0cc6c8fb-4b9f-439e-92d9-65130deb9426 00:06:42.435 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:42.435 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:42.694 [2024-11-19 11:08:38.164844] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:06:42.694 [2024-11-19 11:08:38.164931] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:42.694 true 00:06:42.694 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:06:42.694 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:43.261 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:43.261 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:43.261 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0cc6c8fb-4b9f-439e-92d9-65130deb9426 00:06:43.519 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:43.777 [2024-11-19 11:08:39.248138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.777 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:44.344 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2504764 00:06:44.344 11:08:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:44.344 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2504764 /var/tmp/bdevperf.sock 00:06:44.344 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2504764 ']' 00:06:44.344 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:44.344 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:44.344 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.344 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:44.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:44.344 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.344 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:44.344 [2024-11-19 11:08:39.583996] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:06:44.344 [2024-11-19 11:08:39.584072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504764 ] 00:06:44.344 [2024-11-19 11:08:39.658419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.344 [2024-11-19 11:08:39.715277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.344 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.344 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:06:44.344 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:44.911 Nvme0n1 00:06:44.911 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:45.170 [ 00:06:45.170 { 00:06:45.170 "name": "Nvme0n1", 00:06:45.170 "aliases": [ 00:06:45.170 "0cc6c8fb-4b9f-439e-92d9-65130deb9426" 00:06:45.170 ], 00:06:45.170 "product_name": "NVMe disk", 00:06:45.170 "block_size": 4096, 00:06:45.170 "num_blocks": 38912, 00:06:45.170 "uuid": "0cc6c8fb-4b9f-439e-92d9-65130deb9426", 00:06:45.170 "numa_id": 1, 00:06:45.170 "assigned_rate_limits": { 00:06:45.170 "rw_ios_per_sec": 0, 00:06:45.170 "rw_mbytes_per_sec": 0, 00:06:45.170 "r_mbytes_per_sec": 0, 00:06:45.170 "w_mbytes_per_sec": 0 00:06:45.170 }, 00:06:45.170 "claimed": false, 00:06:45.170 "zoned": false, 00:06:45.170 "supported_io_types": { 00:06:45.170 "read": true, 
00:06:45.170 "write": true, 00:06:45.170 "unmap": true, 00:06:45.170 "flush": true, 00:06:45.170 "reset": true, 00:06:45.170 "nvme_admin": true, 00:06:45.170 "nvme_io": true, 00:06:45.170 "nvme_io_md": false, 00:06:45.170 "write_zeroes": true, 00:06:45.170 "zcopy": false, 00:06:45.170 "get_zone_info": false, 00:06:45.170 "zone_management": false, 00:06:45.170 "zone_append": false, 00:06:45.170 "compare": true, 00:06:45.170 "compare_and_write": true, 00:06:45.170 "abort": true, 00:06:45.170 "seek_hole": false, 00:06:45.170 "seek_data": false, 00:06:45.170 "copy": true, 00:06:45.170 "nvme_iov_md": false 00:06:45.170 }, 00:06:45.170 "memory_domains": [ 00:06:45.170 { 00:06:45.170 "dma_device_id": "system", 00:06:45.170 "dma_device_type": 1 00:06:45.170 } 00:06:45.170 ], 00:06:45.170 "driver_specific": { 00:06:45.170 "nvme": [ 00:06:45.170 { 00:06:45.170 "trid": { 00:06:45.170 "trtype": "TCP", 00:06:45.170 "adrfam": "IPv4", 00:06:45.170 "traddr": "10.0.0.2", 00:06:45.170 "trsvcid": "4420", 00:06:45.170 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:45.170 }, 00:06:45.170 "ctrlr_data": { 00:06:45.170 "cntlid": 1, 00:06:45.170 "vendor_id": "0x8086", 00:06:45.170 "model_number": "SPDK bdev Controller", 00:06:45.170 "serial_number": "SPDK0", 00:06:45.170 "firmware_revision": "25.01", 00:06:45.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:45.170 "oacs": { 00:06:45.170 "security": 0, 00:06:45.170 "format": 0, 00:06:45.170 "firmware": 0, 00:06:45.170 "ns_manage": 0 00:06:45.170 }, 00:06:45.170 "multi_ctrlr": true, 00:06:45.170 "ana_reporting": false 00:06:45.170 }, 00:06:45.170 "vs": { 00:06:45.170 "nvme_version": "1.3" 00:06:45.170 }, 00:06:45.170 "ns_data": { 00:06:45.170 "id": 1, 00:06:45.170 "can_share": true 00:06:45.170 } 00:06:45.170 } 00:06:45.170 ], 00:06:45.170 "mp_policy": "active_passive" 00:06:45.170 } 00:06:45.170 } 00:06:45.170 ] 00:06:45.170 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2504898 00:06:45.170 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:45.170 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:45.170 Running I/O for 10 seconds... 00:06:46.544 Latency(us) 00:06:46.544 [2024-11-19T10:08:42.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.544 Nvme0n1 : 1.00 16278.00 63.59 0.00 0.00 0.00 0.00 0.00 00:06:46.544 [2024-11-19T10:08:42.041Z] =================================================================================================================== 00:06:46.544 [2024-11-19T10:08:42.041Z] Total : 16278.00 63.59 0.00 0.00 0.00 0.00 0.00 00:06:46.544 00:06:47.111 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:06:47.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:47.369 Nvme0n1 : 2.00 16405.50 64.08 0.00 0.00 0.00 0.00 0.00 00:06:47.369 [2024-11-19T10:08:42.866Z] =================================================================================================================== 00:06:47.369 [2024-11-19T10:08:42.866Z] Total : 16405.50 64.08 0.00 0.00 0.00 0.00 0.00 00:06:47.369 00:06:47.369 true 00:06:47.369 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:06:47.369 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:47.935 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:47.935 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:47.935 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2504898 00:06:48.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:48.193 Nvme0n1 : 3.00 16484.67 64.39 0.00 0.00 0.00 0.00 0.00 00:06:48.193 [2024-11-19T10:08:43.690Z] =================================================================================================================== 00:06:48.193 [2024-11-19T10:08:43.690Z] Total : 16484.67 64.39 0.00 0.00 0.00 0.00 0.00 00:06:48.193 00:06:49.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.568 Nvme0n1 : 4.00 16573.00 64.74 0.00 0.00 0.00 0.00 0.00 00:06:49.568 [2024-11-19T10:08:45.065Z] =================================================================================================================== 00:06:49.568 [2024-11-19T10:08:45.065Z] Total : 16573.00 64.74 0.00 0.00 0.00 0.00 0.00 00:06:49.568 00:06:50.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.503 Nvme0n1 : 5.00 16681.20 65.16 0.00 0.00 0.00 0.00 0.00 00:06:50.503 [2024-11-19T10:08:46.000Z] =================================================================================================================== 00:06:50.503 [2024-11-19T10:08:46.000Z] Total : 16681.20 65.16 0.00 0.00 0.00 0.00 0.00 00:06:50.503 00:06:51.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.438 Nvme0n1 : 6.00 16748.67 65.42 0.00 0.00 0.00 0.00 0.00 00:06:51.438 [2024-11-19T10:08:46.935Z] =================================================================================================================== 00:06:51.438 
[2024-11-19T10:08:46.935Z] Total : 16748.67 65.42 0.00 0.00 0.00 0.00 0.00 00:06:51.438 00:06:52.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.371 Nvme0n1 : 7.00 16805.29 65.65 0.00 0.00 0.00 0.00 0.00 00:06:52.371 [2024-11-19T10:08:47.868Z] =================================================================================================================== 00:06:52.371 [2024-11-19T10:08:47.868Z] Total : 16805.29 65.65 0.00 0.00 0.00 0.00 0.00 00:06:52.371 00:06:53.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.305 Nvme0n1 : 8.00 16833.12 65.75 0.00 0.00 0.00 0.00 0.00 00:06:53.305 [2024-11-19T10:08:48.802Z] =================================================================================================================== 00:06:53.305 [2024-11-19T10:08:48.802Z] Total : 16833.12 65.75 0.00 0.00 0.00 0.00 0.00 00:06:53.305 00:06:54.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.252 Nvme0n1 : 9.00 16875.00 65.92 0.00 0.00 0.00 0.00 0.00 00:06:54.252 [2024-11-19T10:08:49.749Z] =================================================================================================================== 00:06:54.252 [2024-11-19T10:08:49.749Z] Total : 16875.00 65.92 0.00 0.00 0.00 0.00 0.00 00:06:54.252 00:06:55.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.187 Nvme0n1 : 10.00 16891.80 65.98 0.00 0.00 0.00 0.00 0.00 00:06:55.187 [2024-11-19T10:08:50.684Z] =================================================================================================================== 00:06:55.187 [2024-11-19T10:08:50.684Z] Total : 16891.80 65.98 0.00 0.00 0.00 0.00 0.00 00:06:55.187 00:06:55.187 00:06:55.187 Latency(us) 00:06:55.187 [2024-11-19T10:08:50.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:55.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:06:55.187 Nvme0n1 : 10.00 16898.21 66.01 0.00 0.00 7570.94 4393.34 15340.28 00:06:55.187 [2024-11-19T10:08:50.684Z] =================================================================================================================== 00:06:55.187 [2024-11-19T10:08:50.684Z] Total : 16898.21 66.01 0.00 0.00 7570.94 4393.34 15340.28 00:06:55.187 { 00:06:55.187 "results": [ 00:06:55.187 { 00:06:55.187 "job": "Nvme0n1", 00:06:55.187 "core_mask": "0x2", 00:06:55.187 "workload": "randwrite", 00:06:55.187 "status": "finished", 00:06:55.187 "queue_depth": 128, 00:06:55.187 "io_size": 4096, 00:06:55.187 "runtime": 10.003783, 00:06:55.187 "iops": 16898.207408137503, 00:06:55.187 "mibps": 66.00862268803712, 00:06:55.187 "io_failed": 0, 00:06:55.187 "io_timeout": 0, 00:06:55.187 "avg_latency_us": 7570.940978642237, 00:06:55.187 "min_latency_us": 4393.339259259259, 00:06:55.187 "max_latency_us": 15340.278518518518 00:06:55.187 } 00:06:55.187 ], 00:06:55.187 "core_count": 1 00:06:55.187 } 00:06:55.187 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2504764 00:06:55.187 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2504764 ']' 00:06:55.187 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2504764 00:06:55.446 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:06:55.446 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.446 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2504764 00:06:55.446 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:55.446 11:08:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:55.446 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2504764' 00:06:55.446 killing process with pid 2504764 00:06:55.446 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2504764 00:06:55.447 Received shutdown signal, test time was about 10.000000 seconds 00:06:55.447 00:06:55.447 Latency(us) 00:06:55.447 [2024-11-19T10:08:50.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:55.447 [2024-11-19T10:08:50.944Z] =================================================================================================================== 00:06:55.447 [2024-11-19T10:08:50.944Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:55.447 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2504764 00:06:55.705 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:55.963 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:56.222 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:06:56.222 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2502132 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2502132 00:06:56.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2502132 Killed "${NVMF_APP[@]}" "$@" 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2506239 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2506239 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2506239 ']' 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.480 11:08:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.480 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:56.480 [2024-11-19 11:08:51.871138] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:06:56.480 [2024-11-19 11:08:51.871239] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.480 [2024-11-19 11:08:51.955519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.739 [2024-11-19 11:08:52.014871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.739 [2024-11-19 11:08:52.014961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.739 [2024-11-19 11:08:52.014975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.739 [2024-11-19 11:08:52.014986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.739 [2024-11-19 11:08:52.014995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:56.739 [2024-11-19 11:08:52.015646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.739 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.739 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:06:56.739 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:56.739 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.739 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:56.739 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.739 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:56.997 [2024-11-19 11:08:52.417324] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:06:56.997 [2024-11-19 11:08:52.417496] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:06:56.997 [2024-11-19 11:08:52.417547] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:06:56.997 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:06:56.997 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0cc6c8fb-4b9f-439e-92d9-65130deb9426 00:06:56.997 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0cc6c8fb-4b9f-439e-92d9-65130deb9426 
00:06:56.997 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:56.997 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:06:56.997 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:56.997 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:56.997 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:57.255 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0cc6c8fb-4b9f-439e-92d9-65130deb9426 -t 2000 00:06:57.513 [ 00:06:57.513 { 00:06:57.513 "name": "0cc6c8fb-4b9f-439e-92d9-65130deb9426", 00:06:57.513 "aliases": [ 00:06:57.513 "lvs/lvol" 00:06:57.513 ], 00:06:57.513 "product_name": "Logical Volume", 00:06:57.513 "block_size": 4096, 00:06:57.513 "num_blocks": 38912, 00:06:57.513 "uuid": "0cc6c8fb-4b9f-439e-92d9-65130deb9426", 00:06:57.513 "assigned_rate_limits": { 00:06:57.513 "rw_ios_per_sec": 0, 00:06:57.513 "rw_mbytes_per_sec": 0, 00:06:57.513 "r_mbytes_per_sec": 0, 00:06:57.513 "w_mbytes_per_sec": 0 00:06:57.513 }, 00:06:57.513 "claimed": false, 00:06:57.513 "zoned": false, 00:06:57.513 "supported_io_types": { 00:06:57.513 "read": true, 00:06:57.513 "write": true, 00:06:57.513 "unmap": true, 00:06:57.513 "flush": false, 00:06:57.513 "reset": true, 00:06:57.513 "nvme_admin": false, 00:06:57.513 "nvme_io": false, 00:06:57.513 "nvme_io_md": false, 00:06:57.513 "write_zeroes": true, 00:06:57.513 "zcopy": false, 00:06:57.513 "get_zone_info": false, 00:06:57.513 "zone_management": false, 00:06:57.513 "zone_append": 
false, 00:06:57.513 "compare": false, 00:06:57.513 "compare_and_write": false, 00:06:57.513 "abort": false, 00:06:57.513 "seek_hole": true, 00:06:57.513 "seek_data": true, 00:06:57.513 "copy": false, 00:06:57.513 "nvme_iov_md": false 00:06:57.513 }, 00:06:57.513 "driver_specific": { 00:06:57.513 "lvol": { 00:06:57.513 "lvol_store_uuid": "487e7adf-b053-44f5-9c9e-c48db643150f", 00:06:57.513 "base_bdev": "aio_bdev", 00:06:57.513 "thin_provision": false, 00:06:57.513 "num_allocated_clusters": 38, 00:06:57.513 "snapshot": false, 00:06:57.514 "clone": false, 00:06:57.514 "esnap_clone": false 00:06:57.514 } 00:06:57.514 } 00:06:57.514 } 00:06:57.514 ] 00:06:57.514 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:06:57.514 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:06:57.514 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:06:57.771 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:06:57.771 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:06:57.771 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:06:58.028 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:06:58.029 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:06:58.286 [2024-11-19 11:08:53.778939] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:58.544 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:06:58.544 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:06:58.544 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:06:58.544 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.544 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.544 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.544 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.544 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.544 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.544 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.544 11:08:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:58.544 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:06:58.800 request: 00:06:58.800 { 00:06:58.800 "uuid": "487e7adf-b053-44f5-9c9e-c48db643150f", 00:06:58.800 "method": "bdev_lvol_get_lvstores", 00:06:58.800 "req_id": 1 00:06:58.800 } 00:06:58.800 Got JSON-RPC error response 00:06:58.800 response: 00:06:58.800 { 00:06:58.800 "code": -19, 00:06:58.800 "message": "No such device" 00:06:58.800 } 00:06:58.800 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:06:58.801 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.801 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:58.801 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.801 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:59.058 aio_bdev 00:06:59.058 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0cc6c8fb-4b9f-439e-92d9-65130deb9426 00:06:59.058 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0cc6c8fb-4b9f-439e-92d9-65130deb9426 00:06:59.058 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:59.058 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:06:59.058 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:59.058 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:59.058 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:59.316 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0cc6c8fb-4b9f-439e-92d9-65130deb9426 -t 2000 00:06:59.574 [ 00:06:59.574 { 00:06:59.574 "name": "0cc6c8fb-4b9f-439e-92d9-65130deb9426", 00:06:59.574 "aliases": [ 00:06:59.574 "lvs/lvol" 00:06:59.574 ], 00:06:59.574 "product_name": "Logical Volume", 00:06:59.574 "block_size": 4096, 00:06:59.574 "num_blocks": 38912, 00:06:59.574 "uuid": "0cc6c8fb-4b9f-439e-92d9-65130deb9426", 00:06:59.574 "assigned_rate_limits": { 00:06:59.574 "rw_ios_per_sec": 0, 00:06:59.574 "rw_mbytes_per_sec": 0, 00:06:59.574 "r_mbytes_per_sec": 0, 00:06:59.574 "w_mbytes_per_sec": 0 00:06:59.574 }, 00:06:59.574 "claimed": false, 00:06:59.574 "zoned": false, 00:06:59.574 "supported_io_types": { 00:06:59.574 "read": true, 00:06:59.574 "write": true, 00:06:59.574 "unmap": true, 00:06:59.574 "flush": false, 00:06:59.574 "reset": true, 00:06:59.574 "nvme_admin": false, 00:06:59.574 "nvme_io": false, 00:06:59.574 "nvme_io_md": false, 00:06:59.574 "write_zeroes": true, 00:06:59.574 "zcopy": false, 00:06:59.574 "get_zone_info": false, 00:06:59.574 "zone_management": false, 00:06:59.574 "zone_append": false, 00:06:59.574 "compare": false, 00:06:59.574 "compare_and_write": false, 
00:06:59.574 "abort": false, 00:06:59.574 "seek_hole": true, 00:06:59.574 "seek_data": true, 00:06:59.574 "copy": false, 00:06:59.574 "nvme_iov_md": false 00:06:59.574 }, 00:06:59.574 "driver_specific": { 00:06:59.574 "lvol": { 00:06:59.574 "lvol_store_uuid": "487e7adf-b053-44f5-9c9e-c48db643150f", 00:06:59.574 "base_bdev": "aio_bdev", 00:06:59.574 "thin_provision": false, 00:06:59.574 "num_allocated_clusters": 38, 00:06:59.574 "snapshot": false, 00:06:59.574 "clone": false, 00:06:59.574 "esnap_clone": false 00:06:59.574 } 00:06:59.574 } 00:06:59.574 } 00:06:59.574 ] 00:06:59.574 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:06:59.575 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:06:59.575 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:59.831 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:59.831 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:06:59.831 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:00.087 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:00.087 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0cc6c8fb-4b9f-439e-92d9-65130deb9426 00:07:00.344 11:08:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 487e7adf-b053-44f5-9c9e-c48db643150f 00:07:00.601 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:00.859 00:07:00.859 real 0m19.496s 00:07:00.859 user 0m49.072s 00:07:00.859 sys 0m4.952s 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:00.859 ************************************ 00:07:00.859 END TEST lvs_grow_dirty 00:07:00.859 ************************************ 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:00.859 nvmf_trace.0 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:00.859 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:00.859 rmmod nvme_tcp 00:07:01.116 rmmod nvme_fabrics 00:07:01.116 rmmod nvme_keyring 00:07:01.116 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:01.116 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:01.116 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:01.116 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2506239 ']' 00:07:01.116 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2506239 00:07:01.117 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2506239 ']' 00:07:01.117 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2506239 
00:07:01.117 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:01.117 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.117 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2506239 00:07:01.117 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.117 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.117 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2506239' 00:07:01.117 killing process with pid 2506239 00:07:01.117 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2506239 00:07:01.117 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2506239 00:07:01.376 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:01.376 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:01.376 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:01.376 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:01.376 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:01.376 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:01.376 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:01.376 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:01.376 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:01.376 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.376 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.376 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.285 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:03.285 00:07:03.285 real 0m43.333s 00:07:03.285 user 1m12.695s 00:07:03.285 sys 0m9.165s 00:07:03.285 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.285 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:03.285 ************************************ 00:07:03.285 END TEST nvmf_lvs_grow 00:07:03.285 ************************************ 00:07:03.285 11:08:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:03.285 11:08:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.285 11:08:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.285 11:08:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:03.285 ************************************ 00:07:03.285 START TEST nvmf_bdev_io_wait 00:07:03.285 ************************************ 00:07:03.285 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:03.544 * Looking for test storage... 
00:07:03.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.544 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.544 --rc genhtml_branch_coverage=1 00:07:03.544 --rc genhtml_function_coverage=1 00:07:03.544 --rc genhtml_legend=1 00:07:03.544 --rc geninfo_all_blocks=1 00:07:03.544 --rc geninfo_unexecuted_blocks=1 00:07:03.544 00:07:03.544 ' 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.544 --rc genhtml_branch_coverage=1 00:07:03.544 --rc genhtml_function_coverage=1 00:07:03.544 --rc genhtml_legend=1 00:07:03.544 --rc geninfo_all_blocks=1 00:07:03.544 --rc geninfo_unexecuted_blocks=1 00:07:03.544 00:07:03.544 ' 00:07:03.544 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.544 --rc genhtml_branch_coverage=1 00:07:03.544 --rc genhtml_function_coverage=1 00:07:03.544 --rc genhtml_legend=1 00:07:03.544 --rc geninfo_all_blocks=1 00:07:03.544 --rc geninfo_unexecuted_blocks=1 00:07:03.545 00:07:03.545 ' 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.545 --rc genhtml_branch_coverage=1 00:07:03.545 --rc genhtml_function_coverage=1 00:07:03.545 --rc genhtml_legend=1 00:07:03.545 --rc geninfo_all_blocks=1 00:07:03.545 --rc geninfo_unexecuted_blocks=1 00:07:03.545 00:07:03.545 ' 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.545 11:08:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:03.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:03.545 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:06.131 11:09:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:06.131 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.131 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:06.131 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.132 11:09:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:07:06.132 Found net devices under 0000:82:00.0: cvl_0_0 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.132 
11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:06.132 Found net devices under 0000:82:00.1: cvl_0_1 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.132 11:09:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.132 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:06.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:07:06.419 00:07:06.419 --- 10.0.0.2 ping statistics --- 00:07:06.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.419 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:06.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:07:06.419 00:07:06.419 --- 10.0.0.1 ping statistics --- 00:07:06.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.419 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2509182 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2509182 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2509182 ']' 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.419 11:09:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:06.419 [2024-11-19 11:09:01.793557] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:07:06.419 [2024-11-19 11:09:01.793637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.419 [2024-11-19 11:09:01.879873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.678 [2024-11-19 11:09:01.940645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.678 [2024-11-19 11:09:01.940717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:06.678 [2024-11-19 11:09:01.940732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.678 [2024-11-19 11:09:01.940743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.678 [2024-11-19 11:09:01.940753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.678 [2024-11-19 11:09:01.942544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.678 [2024-11-19 11:09:01.942593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.678 [2024-11-19 11:09:01.942616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.678 [2024-11-19 11:09:01.942620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:06.678 11:09:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:06.678 [2024-11-19 11:09:02.127509] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:06.678 Malloc0 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.678 
11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.678 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:06.937 [2024-11-19 11:09:02.180721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2509334 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2509336 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
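`gen_nvmf_target_json`, invoked here once per bdevperf instance, assembles the `bdev_nvme_attach_controller` JSON from a heredoc template, one entry per subsystem, joined with `IFS=,`. A simplified standalone sketch of that pattern (the env-var defaults are assumptions for illustration; the real values come from the `nvmf/common.sh` network setup):

```shell
#!/usr/bin/env bash
# Simplified sketch of the gen_nvmf_target_json heredoc pattern.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT defaults below are
# assumed stand-ins, not the real nvmf/common.sh configuration.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json_sketch 1
```

Each bdevperf process reads the result via `--json /dev/fd/63`, i.e. through process substitution, so no config file ever touches disk.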
00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2509338 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:06.937 { 00:07:06.937 "params": { 00:07:06.937 "name": "Nvme$subsystem", 00:07:06.937 "trtype": "$TEST_TRANSPORT", 00:07:06.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:06.937 "adrfam": "ipv4", 00:07:06.937 "trsvcid": "$NVMF_PORT", 00:07:06.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:06.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:06.937 "hdgst": ${hdgst:-false}, 00:07:06.937 "ddgst": ${ddgst:-false} 00:07:06.937 }, 00:07:06.937 "method": "bdev_nvme_attach_controller" 00:07:06.937 } 00:07:06.937 EOF 00:07:06.937 )") 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:06.937 { 00:07:06.937 "params": { 00:07:06.937 
"name": "Nvme$subsystem", 00:07:06.937 "trtype": "$TEST_TRANSPORT", 00:07:06.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:06.937 "adrfam": "ipv4", 00:07:06.937 "trsvcid": "$NVMF_PORT", 00:07:06.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:06.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:06.937 "hdgst": ${hdgst:-false}, 00:07:06.937 "ddgst": ${ddgst:-false} 00:07:06.937 }, 00:07:06.937 "method": "bdev_nvme_attach_controller" 00:07:06.937 } 00:07:06.937 EOF 00:07:06.937 )") 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2509340 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:06.937 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:07:06.938 { 00:07:06.938 "params": { 00:07:06.938 "name": "Nvme$subsystem", 00:07:06.938 "trtype": "$TEST_TRANSPORT", 00:07:06.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:06.938 "adrfam": "ipv4", 00:07:06.938 "trsvcid": "$NVMF_PORT", 00:07:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:06.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:06.938 "hdgst": ${hdgst:-false}, 00:07:06.938 "ddgst": ${ddgst:-false} 00:07:06.938 }, 00:07:06.938 "method": "bdev_nvme_attach_controller" 00:07:06.938 } 00:07:06.938 EOF 00:07:06.938 )") 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:06.938 { 00:07:06.938 "params": { 00:07:06.938 "name": "Nvme$subsystem", 00:07:06.938 "trtype": "$TEST_TRANSPORT", 00:07:06.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:06.938 "adrfam": "ipv4", 00:07:06.938 "trsvcid": "$NVMF_PORT", 00:07:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:06.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:06.938 "hdgst": ${hdgst:-false}, 00:07:06.938 "ddgst": ${ddgst:-false} 00:07:06.938 }, 00:07:06.938 "method": "bdev_nvme_attach_controller" 00:07:06.938 } 00:07:06.938 EOF 00:07:06.938 )") 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2509334 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:06.938 
11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:06.938 "params": { 00:07:06.938 "name": "Nvme1", 00:07:06.938 "trtype": "tcp", 00:07:06.938 "traddr": "10.0.0.2", 00:07:06.938 "adrfam": "ipv4", 00:07:06.938 "trsvcid": "4420", 00:07:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:06.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:06.938 "hdgst": false, 00:07:06.938 "ddgst": false 00:07:06.938 }, 00:07:06.938 "method": "bdev_nvme_attach_controller" 00:07:06.938 }' 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
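Each of these printed configs feeds one bdevperf job. In the per-job summary tables, the MiB/s column follows directly from IOPS and the 4096-byte IO size (IOPS x 4096 / 2^20), which can be sanity-checked with a one-liner:

```shell
# Convert a bdevperf IOPS figure to MiB/s for a given IO size in bytes.
iops_to_mibs() {
    awk -v iops="$1" -v io="$2" 'BEGIN { printf "%.2f\n", iops * io / 1048576 }'
}

iops_to_mibs 11439.10 4096    # read job:  prints 44.68
iops_to_mibs 198563.55 4096   # flush job: prints 775.64
```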
00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:06.938 "params": { 00:07:06.938 "name": "Nvme1", 00:07:06.938 "trtype": "tcp", 00:07:06.938 "traddr": "10.0.0.2", 00:07:06.938 "adrfam": "ipv4", 00:07:06.938 "trsvcid": "4420", 00:07:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:06.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:06.938 "hdgst": false, 00:07:06.938 "ddgst": false 00:07:06.938 }, 00:07:06.938 "method": "bdev_nvme_attach_controller" 00:07:06.938 }' 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:06.938 "params": { 00:07:06.938 "name": "Nvme1", 00:07:06.938 "trtype": "tcp", 00:07:06.938 "traddr": "10.0.0.2", 00:07:06.938 "adrfam": "ipv4", 00:07:06.938 "trsvcid": "4420", 00:07:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:06.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:06.938 "hdgst": false, 00:07:06.938 "ddgst": false 00:07:06.938 }, 00:07:06.938 "method": "bdev_nvme_attach_controller" 00:07:06.938 }' 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:06.938 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:06.938 "params": { 00:07:06.938 "name": "Nvme1", 00:07:06.938 "trtype": "tcp", 00:07:06.938 "traddr": "10.0.0.2", 00:07:06.938 "adrfam": "ipv4", 00:07:06.938 "trsvcid": "4420", 00:07:06.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:06.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:06.938 "hdgst": false, 00:07:06.938 "ddgst": false 00:07:06.938 }, 00:07:06.938 "method": "bdev_nvme_attach_controller" 00:07:06.938 }' 00:07:06.938 [2024-11-19 11:09:02.232938] Starting SPDK v25.01-pre git sha1 
73f18e890 / DPDK 24.03.0 initialization... 00:07:06.938 [2024-11-19 11:09:02.233012] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:06.938 [2024-11-19 11:09:02.233941] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:07:06.938 [2024-11-19 11:09:02.233943] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:07:06.938 [2024-11-19 11:09:02.233968] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:07:06.938 [2024-11-19 11:09:02.234035] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:06.938 [2024-11-19 11:09:02.234034] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:06.938 [2024-11-19 11:09:02.234037] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:06.938 [2024-11-19 11:09:02.433824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.197 [2024-11-19 11:09:02.487495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:07.197 [2024-11-19 11:09:02.533265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.197 [2024-11-19 11:09:02.586471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:07.197 [2024-11-19
11:09:02.633306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.197 [2024-11-19 11:09:02.693744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:07.456 [2024-11-19 11:09:02.711980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.456 [2024-11-19 11:09:02.766328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:07.456 Running I/O for 1 seconds... 00:07:07.456 Running I/O for 1 seconds... 00:07:07.715 Running I/O for 1 seconds... 00:07:07.715 Running I/O for 1 seconds... 00:07:08.651 11397.00 IOPS, 44.52 MiB/s 00:07:08.651 Latency(us) 00:07:08.651 [2024-11-19T10:09:04.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.651 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:08.651 Nvme1n1 : 1.01 11439.10 44.68 0.00 0.00 11144.91 6262.33 18835.53 00:07:08.651 [2024-11-19T10:09:04.148Z] =================================================================================================================== 00:07:08.651 [2024-11-19T10:09:04.148Z] Total : 11439.10 44.68 0.00 0.00 11144.91 6262.33 18835.53 00:07:08.651 5412.00 IOPS, 21.14 MiB/s 00:07:08.651 Latency(us) 00:07:08.651 [2024-11-19T10:09:04.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.651 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:08.651 Nvme1n1 : 1.02 5442.87 21.26 0.00 0.00 23302.93 5825.42 37671.06 00:07:08.651 [2024-11-19T10:09:04.148Z] =================================================================================================================== 00:07:08.651 [2024-11-19T10:09:04.148Z] Total : 5442.87 21.26 0.00 0.00 23302.93 5825.42 37671.06 00:07:08.651 198936.00 IOPS, 777.09 MiB/s 00:07:08.651 Latency(us) 00:07:08.651 [2024-11-19T10:09:04.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.651 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 
128, IO size: 4096) 00:07:08.651 Nvme1n1 : 1.00 198563.55 775.64 0.00 0.00 641.25 294.31 1856.85 00:07:08.651 [2024-11-19T10:09:04.148Z] =================================================================================================================== 00:07:08.651 [2024-11-19T10:09:04.148Z] Total : 198563.55 775.64 0.00 0.00 641.25 294.31 1856.85 00:07:08.651 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2509336 00:07:08.651 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2509338 00:07:08.651 5681.00 IOPS, 22.19 MiB/s 00:07:08.651 Latency(us) 00:07:08.651 [2024-11-19T10:09:04.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.651 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:08.651 Nvme1n1 : 1.01 5778.27 22.57 0.00 0.00 22068.39 4975.88 51263.72 00:07:08.651 [2024-11-19T10:09:04.148Z] =================================================================================================================== 00:07:08.651 [2024-11-19T10:09:04.148Z] Total : 5778.27 22.57 0.00 0.00 22068.39 4975.88 51263.72 00:07:08.909 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2509340 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:08.910 rmmod nvme_tcp 00:07:08.910 rmmod nvme_fabrics 00:07:08.910 rmmod nvme_keyring 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2509182 ']' 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2509182 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2509182 ']' 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2509182 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2509182 00:07:08.910 11:09:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2509182' 00:07:08.910 killing process with pid 2509182 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2509182 00:07:08.910 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2509182 00:07:09.168 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:09.168 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:09.168 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:09.168 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:09.168 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:09.168 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:09.168 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:09.168 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:09.168 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:09.168 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.168 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.168 11:09:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:11.707 00:07:11.707 real 0m7.864s 00:07:11.707 user 0m16.762s 00:07:11.707 sys 0m3.908s 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:11.707 ************************************ 00:07:11.707 END TEST nvmf_bdev_io_wait 00:07:11.707 ************************************ 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.707 ************************************ 00:07:11.707 START TEST nvmf_queue_depth 00:07:11.707 ************************************ 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:11.707 * Looking for test storage... 
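The `killprocess` teardown above signals the `nvmf_tgt` pid and then `wait`s on it, so the process is actually reaped before namespace and iptables cleanup continue. A minimal sketch of that signal-reap-verify pattern (SIGKILL here is a simplification; the real `autotest_common.sh` helper first resolves the process name with `ps --no-headers -o comm=`, as the xtrace above shows):

```shell
#!/usr/bin/env bash
# Sketch of a kill-and-reap teardown: signal the pid, wait to reap it,
# then verify it is gone. Not the real killprocess implementation.
kill_and_reap() {
    local pid=$1
    kill -9 "$pid" 2>/dev/null || return 1
    wait "$pid" 2>/dev/null        # reap; SIGKILL makes this return nonzero
    ! kill -0 "$pid" 2>/dev/null   # true only once the process is gone
}

sleep 30 &          # stand-in for the long-running nvmf_tgt process
kill_and_reap $! && echo "reaped"   # prints "reaped"
```

Reaping via `wait` matters here: without it, a `kill -0` check could still see the zombie and the suite could race its own cleanup.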
00:07:11.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:11.707 
11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.707 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:11.707 --rc genhtml_branch_coverage=1 00:07:11.707 --rc genhtml_function_coverage=1 00:07:11.707 --rc genhtml_legend=1 00:07:11.707 --rc geninfo_all_blocks=1 00:07:11.707 --rc geninfo_unexecuted_blocks=1 00:07:11.707 00:07:11.707 ' 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.707 --rc genhtml_branch_coverage=1 00:07:11.707 --rc genhtml_function_coverage=1 00:07:11.707 --rc genhtml_legend=1 00:07:11.707 --rc geninfo_all_blocks=1 00:07:11.707 --rc geninfo_unexecuted_blocks=1 00:07:11.707 00:07:11.707 ' 00:07:11.707 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.707 --rc genhtml_branch_coverage=1 00:07:11.707 --rc genhtml_function_coverage=1 00:07:11.708 --rc genhtml_legend=1 00:07:11.708 --rc geninfo_all_blocks=1 00:07:11.708 --rc geninfo_unexecuted_blocks=1 00:07:11.708 00:07:11.708 ' 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.708 --rc genhtml_branch_coverage=1 00:07:11.708 --rc genhtml_function_coverage=1 00:07:11.708 --rc genhtml_legend=1 00:07:11.708 --rc geninfo_all_blocks=1 00:07:11.708 --rc geninfo_unexecuted_blocks=1 00:07:11.708 00:07:11.708 ' 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.708 11:09:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.708 11:09:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.708 11:09:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.708 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:14.240 11:09:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:14.240 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:14.240 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:14.240 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:07:14.241 Found net devices under 0000:82:00.0: cvl_0_0 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:14.241 Found net devices under 0000:82:00.1: cvl_0_1 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.241 
11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:14.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:07:14.241 00:07:14.241 --- 10.0.0.2 ping statistics --- 00:07:14.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.241 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:14.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:14.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:07:14.241 00:07:14.241 --- 10.0.0.1 ping statistics --- 00:07:14.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.241 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2512374 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2512374 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2512374 ']' 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.241 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:14.241 [2024-11-19 11:09:09.716687] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:07:14.241 [2024-11-19 11:09:09.716786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.499 [2024-11-19 11:09:09.807168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.499 [2024-11-19 11:09:09.864692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.499 [2024-11-19 11:09:09.864755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:14.499 [2024-11-19 11:09:09.864783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.499 [2024-11-19 11:09:09.864794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.499 [2024-11-19 11:09:09.864803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:14.499 [2024-11-19 11:09:09.865517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.499 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.499 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:14.499 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:14.499 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.499 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:14.758 [2024-11-19 11:09:10.012250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:14.758 Malloc0 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:14.758 [2024-11-19 11:09:10.061635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.758 11:09:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2512512 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2512512 /var/tmp/bdevperf.sock 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2512512 ']' 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:14.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.758 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:14.758 [2024-11-19 11:09:10.108085] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:07:14.758 [2024-11-19 11:09:10.108164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512512 ] 00:07:14.758 [2024-11-19 11:09:10.185049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.758 [2024-11-19 11:09:10.241461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.017 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.017 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:15.017 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:15.017 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.017 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:15.017 NVMe0n1 00:07:15.017 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.017 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:15.275 Running I/O for 10 seconds... 
00:07:17.148 9176.00 IOPS, 35.84 MiB/s [2024-11-19T10:09:14.020Z] 9220.50 IOPS, 36.02 MiB/s [2024-11-19T10:09:14.956Z] 9515.33 IOPS, 37.17 MiB/s [2024-11-19T10:09:15.890Z] 9493.75 IOPS, 37.08 MiB/s [2024-11-19T10:09:16.825Z] 9602.20 IOPS, 37.51 MiB/s [2024-11-19T10:09:17.762Z] 9559.50 IOPS, 37.34 MiB/s [2024-11-19T10:09:18.696Z] 9631.71 IOPS, 37.62 MiB/s [2024-11-19T10:09:20.071Z] 9590.62 IOPS, 37.46 MiB/s [2024-11-19T10:09:21.006Z] 9616.56 IOPS, 37.56 MiB/s [2024-11-19T10:09:21.006Z] 9607.80 IOPS, 37.53 MiB/s 00:07:25.509 Latency(us) 00:07:25.509 [2024-11-19T10:09:21.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.509 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:25.509 Verification LBA range: start 0x0 length 0x4000 00:07:25.509 NVMe0n1 : 10.09 9624.83 37.60 0.00 0.00 105992.06 20680.25 69905.07 00:07:25.509 [2024-11-19T10:09:21.006Z] =================================================================================================================== 00:07:25.509 [2024-11-19T10:09:21.006Z] Total : 9624.83 37.60 0.00 0.00 105992.06 20680.25 69905.07 00:07:25.509 { 00:07:25.509 "results": [ 00:07:25.509 { 00:07:25.509 "job": "NVMe0n1", 00:07:25.509 "core_mask": "0x1", 00:07:25.509 "workload": "verify", 00:07:25.509 "status": "finished", 00:07:25.509 "verify_range": { 00:07:25.510 "start": 0, 00:07:25.510 "length": 16384 00:07:25.510 }, 00:07:25.510 "queue_depth": 1024, 00:07:25.510 "io_size": 4096, 00:07:25.510 "runtime": 10.087141, 00:07:25.510 "iops": 9624.828283851688, 00:07:25.510 "mibps": 37.596985483795656, 00:07:25.510 "io_failed": 0, 00:07:25.510 "io_timeout": 0, 00:07:25.510 "avg_latency_us": 105992.05938133382, 00:07:25.510 "min_latency_us": 20680.248888888887, 00:07:25.510 "max_latency_us": 69905.06666666667 00:07:25.510 } 00:07:25.510 ], 00:07:25.510 "core_count": 1 00:07:25.510 } 00:07:25.510 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2512512 00:07:25.510 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2512512 ']' 00:07:25.510 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2512512 00:07:25.510 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:25.510 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.510 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2512512 00:07:25.510 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.510 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.510 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2512512' 00:07:25.510 killing process with pid 2512512 00:07:25.510 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2512512 00:07:25.510 Received shutdown signal, test time was about 10.000000 seconds 00:07:25.510 00:07:25.510 Latency(us) 00:07:25.510 [2024-11-19T10:09:21.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.510 [2024-11-19T10:09:21.007Z] =================================================================================================================== 00:07:25.510 [2024-11-19T10:09:21.007Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:25.510 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2512512 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:25.768 rmmod nvme_tcp 00:07:25.768 rmmod nvme_fabrics 00:07:25.768 rmmod nvme_keyring 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2512374 ']' 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2512374 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2512374 ']' 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2512374 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2512374 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2512374' 00:07:25.768 killing process with pid 2512374 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2512374 00:07:25.768 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2512374 00:07:26.027 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:26.027 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:26.027 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:26.027 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:26.027 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:26.027 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:26.027 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:26.027 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:26.027 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:26.027 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.027 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.027 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.932 11:09:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:27.932 00:07:27.932 real 0m16.745s 00:07:27.932 user 0m22.569s 00:07:27.932 sys 0m3.894s 00:07:27.932 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.932 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.932 ************************************ 00:07:27.932 END TEST nvmf_queue_depth 00:07:27.932 ************************************ 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:28.192 ************************************ 00:07:28.192 START TEST nvmf_target_multipath 00:07:28.192 ************************************ 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:28.192 * Looking for test storage... 
00:07:28.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.192 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:28.193 11:09:23 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.193 --rc genhtml_branch_coverage=1 00:07:28.193 --rc genhtml_function_coverage=1 00:07:28.193 --rc genhtml_legend=1 00:07:28.193 --rc geninfo_all_blocks=1 00:07:28.193 --rc geninfo_unexecuted_blocks=1 00:07:28.193 00:07:28.193 ' 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.193 --rc genhtml_branch_coverage=1 00:07:28.193 --rc genhtml_function_coverage=1 00:07:28.193 --rc genhtml_legend=1 00:07:28.193 --rc geninfo_all_blocks=1 00:07:28.193 --rc geninfo_unexecuted_blocks=1 00:07:28.193 00:07:28.193 ' 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.193 --rc genhtml_branch_coverage=1 00:07:28.193 --rc genhtml_function_coverage=1 00:07:28.193 --rc genhtml_legend=1 00:07:28.193 --rc geninfo_all_blocks=1 00:07:28.193 --rc geninfo_unexecuted_blocks=1 00:07:28.193 00:07:28.193 ' 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.193 --rc genhtml_branch_coverage=1 00:07:28.193 --rc genhtml_function_coverage=1 00:07:28.193 --rc genhtml_legend=1 00:07:28.193 --rc geninfo_all_blocks=1 00:07:28.193 --rc geninfo_unexecuted_blocks=1 00:07:28.193 00:07:28.193 ' 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:28.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.193 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.194 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.194 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:28.194 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:28.194 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:28.194 11:09:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:31.534 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:31.534 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:07:31.534 Found net devices under 0000:82:00.0: cvl_0_0 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.534 11:09:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:31.534 Found net devices under 0000:82:00.1: cvl_0_1 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:31.534 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:31.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:07:31.535 00:07:31.535 --- 10.0.0.2 ping statistics --- 00:07:31.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.535 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:31.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:07:31.535 00:07:31.535 --- 10.0.0.1 ping statistics --- 00:07:31.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.535 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:31.535 only one NIC for nvmf test 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:31.535 11:09:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:31.535 rmmod nvme_tcp 00:07:31.535 rmmod nvme_fabrics 00:07:31.535 rmmod nvme_keyring 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.535 11:09:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:33.445 00:07:33.445 real 0m5.245s 00:07:33.445 user 0m1.174s 00:07:33.445 sys 0m2.100s 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:33.445 ************************************ 00:07:33.445 END TEST nvmf_target_multipath 00:07:33.445 ************************************ 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.445 ************************************ 00:07:33.445 START TEST nvmf_zcopy 00:07:33.445 ************************************ 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:33.445 * Looking for test storage... 00:07:33.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.445 11:09:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.445 --rc genhtml_branch_coverage=1 00:07:33.445 --rc genhtml_function_coverage=1 00:07:33.445 --rc genhtml_legend=1 00:07:33.445 --rc geninfo_all_blocks=1 00:07:33.445 --rc geninfo_unexecuted_blocks=1 00:07:33.445 00:07:33.445 ' 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.445 --rc genhtml_branch_coverage=1 00:07:33.445 --rc genhtml_function_coverage=1 00:07:33.445 --rc genhtml_legend=1 00:07:33.445 --rc geninfo_all_blocks=1 00:07:33.445 --rc geninfo_unexecuted_blocks=1 00:07:33.445 00:07:33.445 ' 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.445 --rc genhtml_branch_coverage=1 00:07:33.445 --rc genhtml_function_coverage=1 00:07:33.445 --rc genhtml_legend=1 00:07:33.445 --rc geninfo_all_blocks=1 00:07:33.445 --rc geninfo_unexecuted_blocks=1 00:07:33.445 00:07:33.445 ' 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.445 --rc genhtml_branch_coverage=1 00:07:33.445 --rc 
genhtml_function_coverage=1 00:07:33.445 --rc genhtml_legend=1 00:07:33.445 --rc geninfo_all_blocks=1 00:07:33.445 --rc geninfo_unexecuted_blocks=1 00:07:33.445 00:07:33.445 ' 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.445 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.446 11:09:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:33.446 11:09:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.446 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:33.705 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:33.705 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:33.705 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.705 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.705 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.705 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:33.705 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:33.705 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:07:33.705 11:09:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:36.244 11:09:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:36.244 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:36.244 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:07:36.244 Found net devices under 0000:82:00.0: cvl_0_0 00:07:36.244 11:09:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:36.244 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:36.245 Found net devices under 0000:82:00.1: cvl_0_1 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.245 11:09:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.245 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:36.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:07:36.504 00:07:36.504 --- 10.0.0.2 ping statistics --- 00:07:36.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.504 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:36.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:07:36.504 00:07:36.504 --- 10.0.0.1 ping statistics --- 00:07:36.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.504 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2518438 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2518438 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2518438 ']' 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.504 11:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:36.504 [2024-11-19 11:09:31.876326] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:07:36.504 [2024-11-19 11:09:31.876445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.504 [2024-11-19 11:09:31.957528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.763 [2024-11-19 11:09:32.016414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.763 [2024-11-19 11:09:32.016470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:36.763 [2024-11-19 11:09:32.016500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.763 [2024-11-19 11:09:32.016512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.763 [2024-11-19 11:09:32.016522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.763 [2024-11-19 11:09:32.017235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:36.763 [2024-11-19 11:09:32.160515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:36.763 [2024-11-19 11:09:32.176767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:36.763 malloc0 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:36.763 { 00:07:36.763 "params": { 00:07:36.763 "name": "Nvme$subsystem", 00:07:36.763 "trtype": "$TEST_TRANSPORT", 00:07:36.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.763 "adrfam": "ipv4", 00:07:36.763 "trsvcid": "$NVMF_PORT", 00:07:36.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.763 "hdgst": ${hdgst:-false}, 00:07:36.763 "ddgst": ${ddgst:-false} 00:07:36.763 }, 00:07:36.763 "method": "bdev_nvme_attach_controller" 00:07:36.763 } 00:07:36.763 EOF 00:07:36.763 )") 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:07:36.763 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:36.763 "params": { 00:07:36.763 "name": "Nvme1", 00:07:36.763 "trtype": "tcp", 00:07:36.763 "traddr": "10.0.0.2", 00:07:36.763 "adrfam": "ipv4", 00:07:36.763 "trsvcid": "4420", 00:07:36.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:36.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:36.763 "hdgst": false, 00:07:36.763 "ddgst": false 00:07:36.763 }, 00:07:36.763 "method": "bdev_nvme_attach_controller" 00:07:36.763 }' 00:07:36.763 [2024-11-19 11:09:32.259716] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:07:36.763 [2024-11-19 11:09:32.259823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518464 ] 00:07:37.022 [2024-11-19 11:09:32.334228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.022 [2024-11-19 11:09:32.393874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.280 Running I/O for 10 seconds... 
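[Editor's note] The heredoc-templated JSON that `gen_nvmf_target_json` emits above (consumed by bdevperf via `--json /dev/fd/62`) can be reproduced with a minimal standalone sketch. Variable values are taken from the trace; the loop and join structure are an approximation of the traced `nvmf/common.sh` logic, not verbatim SPDK source:

```shell
#!/usr/bin/env bash
# Sketch of the config templating visible in the trace: one JSON snippet per
# subsystem, built from an unquoted heredoc so shell variables expand, then
# joined with commas. Values mirror the log (tcp, 10.0.0.2, port 4420).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the per-subsystem snippets with commas, as the IFS=, printf does above.
(IFS=,; printf '%s\n' "${config[*]}")
```

With a single subsystem this prints exactly the `Nvme1` attach-controller block seen in the trace after `jq .`.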
00:07:39.590 6196.00 IOPS, 48.41 MiB/s [2024-11-19T10:09:36.022Z] 6279.50 IOPS, 49.06 MiB/s [2024-11-19T10:09:36.956Z] 6302.33 IOPS, 49.24 MiB/s [2024-11-19T10:09:37.890Z] 6314.00 IOPS, 49.33 MiB/s [2024-11-19T10:09:38.825Z] 6335.80 IOPS, 49.50 MiB/s [2024-11-19T10:09:39.776Z] 6342.17 IOPS, 49.55 MiB/s [2024-11-19T10:09:41.151Z] 6356.14 IOPS, 49.66 MiB/s [2024-11-19T10:09:42.085Z] 6342.75 IOPS, 49.55 MiB/s [2024-11-19T10:09:43.020Z] 6345.89 IOPS, 49.58 MiB/s [2024-11-19T10:09:43.020Z] 6348.90 IOPS, 49.60 MiB/s 00:07:47.523 Latency(us) 00:07:47.523 [2024-11-19T10:09:43.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.523 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:07:47.523 Verification LBA range: start 0x0 length 0x1000 00:07:47.523 Nvme1n1 : 10.02 6351.30 49.62 0.00 0.00 20101.95 3034.07 29515.47 00:07:47.523 [2024-11-19T10:09:43.020Z] =================================================================================================================== 00:07:47.523 [2024-11-19T10:09:43.020Z] Total : 6351.30 49.62 0.00 0.00 20101.95 3034.07 29515.47 00:07:47.523 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2519784 00:07:47.523 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:07:47.523 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:47.523 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:07:47.523 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:07:47.523 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:07:47.523 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:07:47.523 11:09:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:47.523 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:47.523 { 00:07:47.523 "params": { 00:07:47.523 "name": "Nvme$subsystem", 00:07:47.523 "trtype": "$TEST_TRANSPORT", 00:07:47.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.523 "adrfam": "ipv4", 00:07:47.523 "trsvcid": "$NVMF_PORT", 00:07:47.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.523 "hdgst": ${hdgst:-false}, 00:07:47.523 "ddgst": ${ddgst:-false} 00:07:47.523 }, 00:07:47.523 "method": "bdev_nvme_attach_controller" 00:07:47.524 } 00:07:47.524 EOF 00:07:47.524 )") 00:07:47.524 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:07:47.524 [2024-11-19 11:09:42.990873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.524 [2024-11-19 11:09:42.990911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.524 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:07:47.524 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:07:47.524 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:47.524 "params": { 00:07:47.524 "name": "Nvme1", 00:07:47.524 "trtype": "tcp", 00:07:47.524 "traddr": "10.0.0.2", 00:07:47.524 "adrfam": "ipv4", 00:07:47.524 "trsvcid": "4420", 00:07:47.524 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:47.524 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:47.524 "hdgst": false, 00:07:47.524 "ddgst": false 00:07:47.524 }, 00:07:47.524 "method": "bdev_nvme_attach_controller" 00:07:47.524 }' 00:07:47.524 [2024-11-19 11:09:42.998837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.524 [2024-11-19 11:09:42.998859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.524 [2024-11-19 11:09:43.006857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.524 [2024-11-19 11:09:43.006878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.524 [2024-11-19 11:09:43.014881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.524 [2024-11-19 11:09:43.014901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.022907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.022927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.030923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.030943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.031601] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:07:47.783 [2024-11-19 11:09:43.031682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2519784 ] 00:07:47.783 [2024-11-19 11:09:43.038944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.038964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.046965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.046993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.054987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.055007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.063009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.063028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.071033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.071053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.079053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.079072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.087075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.087095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:07:47.783 [2024-11-19 11:09:43.095096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.095116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.103118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.103138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.109587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.783 [2024-11-19 11:09:43.111140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.111159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.119183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.119215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.127208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.127242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.135207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.135227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.143227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.143246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.151248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.151268] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.159269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.159288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.167295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.167316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.171042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.783 [2024-11-19 11:09:43.175311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.175331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.183334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.183377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.191402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.191453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.199424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.199458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.207447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.207483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.215459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:07:47.783 [2024-11-19 11:09:43.215496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.223496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.223534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.231507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.231544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.239506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.239527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.247547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.247582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.255574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.255611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.783 [2024-11-19 11:09:43.263596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.783 [2024-11-19 11:09:43.263633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.784 [2024-11-19 11:09:43.271596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.784 [2024-11-19 11:09:43.271618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:47.784 [2024-11-19 11:09:43.279614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:47.784 [2024-11-19 
11:09:43.279652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.042 [2024-11-19 11:09:43.287662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.042 [2024-11-19 11:09:43.287687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.042 [2024-11-19 11:09:43.295678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.042 [2024-11-19 11:09:43.295715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.042 [2024-11-19 11:09:43.303699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.042 [2024-11-19 11:09:43.303721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.042 [2024-11-19 11:09:43.311721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.042 [2024-11-19 11:09:43.311744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.042 [2024-11-19 11:09:43.319740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.042 [2024-11-19 11:09:43.319761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.042 [2024-11-19 11:09:43.327779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.042 [2024-11-19 11:09:43.327799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.042 [2024-11-19 11:09:43.335800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.042 [2024-11-19 11:09:43.335819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.042 [2024-11-19 11:09:43.343821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.042 [2024-11-19 11:09:43.343848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:07:48.042 [2024-11-19 11:09:43.351843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.042 [2024-11-19 11:09:43.351863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.042 [2024-11-19 11:09:43.359872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.359894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.367892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.367914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.375913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.375935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.383932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.383953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.391960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.391984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 Running I/O for 5 seconds... 
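[Editor's note] The 5-second bdevperf iterations below all run against the network fixture built by the `nvmf_tcp_init` sequence traced near the top of this excerpt (netns creation, link move, addressing, iptables accept rule, ping check). A dry-run sketch of that sequence, with commands and names (`cvl_0_0`, `cvl_0_1`, `10.0.0.0/24`, port 4420) taken from the log; the `run` echo-wrapper is ours so the sketch executes safely without root — drop it to apply the steps for real:

```shell
#!/usr/bin/env bash
# Dry-run of the traced nvmf_tcp_init steps: the target interface is moved
# into its own namespace, both ends get addresses, and TCP/4420 is opened.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                       # target side into netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                    # connectivity check
```

This is why `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk` later in the trace: the listener at 10.0.0.2:4420 only exists inside that namespace.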
00:07:48.043 [2024-11-19 11:09:43.399977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.399997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.413769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.413794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.424530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.424556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.435105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.435130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.445726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.445751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.456168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.456192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.469951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.469977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.479990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.480015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.490438] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.490464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.503059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.503083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.513678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.513704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.524335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.524383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.043 [2024-11-19 11:09:43.538163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.043 [2024-11-19 11:09:43.538192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.548166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.548191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.559245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.559270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.569826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.569850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.580375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.580401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.592550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.592577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.602287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.602312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.613663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.613688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.624336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.624384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.634909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.634934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.645292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.645316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.656249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.656273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.666684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 
[2024-11-19 11:09:43.666710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.677373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.677401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.690559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.690587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.701778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.701804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.710661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.710688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.721700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.721739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.733538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.733564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.743306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.743330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.753576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.753603] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.764457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.764485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.774787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.774813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.785190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.785216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.301 [2024-11-19 11:09:43.795190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.301 [2024-11-19 11:09:43.795217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.806217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.806242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.818393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.818420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.828469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.828496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.838427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.838455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:48.559 [2024-11-19 11:09:43.848976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.849002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.862142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.862167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.872372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.872398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.882703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.882742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.893159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.893185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.903526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.903553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.914088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.914114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.926093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.926119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.935792] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.935818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.559 [2024-11-19 11:09:43.945988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.559 [2024-11-19 11:09:43.946014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.560 [2024-11-19 11:09:43.956143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.560 [2024-11-19 11:09:43.956168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.560 [2024-11-19 11:09:43.966558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.560 [2024-11-19 11:09:43.966587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.560 [2024-11-19 11:09:43.976663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.560 [2024-11-19 11:09:43.976689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.560 [2024-11-19 11:09:43.987413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.560 [2024-11-19 11:09:43.987441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.560 [2024-11-19 11:09:44.000336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.560 [2024-11-19 11:09:44.000399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.560 [2024-11-19 11:09:44.010301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.560 [2024-11-19 11:09:44.010327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.560 [2024-11-19 11:09:44.020618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:48.560 [2024-11-19 11:09:44.020660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.560 [2024-11-19 11:09:44.030788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.560 [2024-11-19 11:09:44.030813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.560 [2024-11-19 11:09:44.040976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.560 [2024-11-19 11:09:44.041001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.560 [2024-11-19 11:09:44.051232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.560 [2024-11-19 11:09:44.051258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.062386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 [2024-11-19 11:09:44.062424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.073560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 [2024-11-19 11:09:44.073589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.084694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 [2024-11-19 11:09:44.084735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.094899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 [2024-11-19 11:09:44.094925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.105311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 
[2024-11-19 11:09:44.105337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.117847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 [2024-11-19 11:09:44.117884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.128022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 [2024-11-19 11:09:44.128047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.138626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 [2024-11-19 11:09:44.138681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.149583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 [2024-11-19 11:09:44.149611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.160100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 [2024-11-19 11:09:44.160125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.172506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 [2024-11-19 11:09:44.172534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.182526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 [2024-11-19 11:09:44.182554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.193333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.818 [2024-11-19 11:09:44.193382] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.818 [2024-11-19 11:09:44.206273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.819 [2024-11-19 11:09:44.206299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.819 [2024-11-19 11:09:44.216716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.819 [2024-11-19 11:09:44.216742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.819 [2024-11-19 11:09:44.227146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.819 [2024-11-19 11:09:44.227172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.819 [2024-11-19 11:09:44.237977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.819 [2024-11-19 11:09:44.238002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.819 [2024-11-19 11:09:44.248807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.819 [2024-11-19 11:09:44.248834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.819 [2024-11-19 11:09:44.261730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.819 [2024-11-19 11:09:44.261757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.819 [2024-11-19 11:09:44.271837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.819 [2024-11-19 11:09:44.271863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.819 [2024-11-19 11:09:44.282539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.819 [2024-11-19 11:09:44.282568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:48.819 [2024-11-19 11:09:44.294931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.819 [2024-11-19 11:09:44.294957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.819 [2024-11-19 11:09:44.304702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.819 [2024-11-19 11:09:44.304741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:48.819 [2024-11-19 11:09:44.315468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:48.819 [2024-11-19 11:09:44.315495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.326014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.326038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.336613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.336653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.348960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.348992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.359070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.359095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.369683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.369708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.382043] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.382068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.391770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.391794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.402358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.402393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 11976.00 IOPS, 93.56 MiB/s [2024-11-19T10:09:44.574Z] [2024-11-19 11:09:44.415336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.415386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.425437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.425465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.435904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.435929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.446188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.446213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.456291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.456316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.466769] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.466795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.477204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.477230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.487653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.487679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.498170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.498194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.508581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.508608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.518983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.519008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.530069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.530094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.540996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.541021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.551498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.551532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.563639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.563679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.077 [2024-11-19 11:09:44.573634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.077 [2024-11-19 11:09:44.573676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.584358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.584392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.596310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.596335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.606024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.606049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.618150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.618175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.627909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.627934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.637960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 
[2024-11-19 11:09:44.637985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.648713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.648738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.659060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.659085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.669636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.669675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.682142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.682167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.693622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.693662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.702765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.702790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.713687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.713726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.725963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.725988] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.735781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.735806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.746412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.746439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.756873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.756898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.767173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.767198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.777728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.777753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.788774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.788799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.799881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.799906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:49.336 [2024-11-19 11:09:44.809875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:49.336 [2024-11-19 11:09:44.809901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:07:49.336 [2024-11-19 11:09:44.819995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:07:49.336 [2024-11-19 11:09:44.820020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:07:50.114 12067.00 IOPS, 94.27 MiB/s [2024-11-19T10:09:45.611Z]
00:07:51.150 12074.67 IOPS, 94.33 MiB/s [2024-11-19T10:09:46.647Z]
[2024-11-19 11:09:46.579238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.150 
[2024-11-19 11:09:46.579270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.150 [2024-11-19 11:09:46.589751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.150 [2024-11-19 11:09:46.589777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.150 [2024-11-19 11:09:46.599744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.150 [2024-11-19 11:09:46.599769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.150 [2024-11-19 11:09:46.609801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.150 [2024-11-19 11:09:46.609827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.150 [2024-11-19 11:09:46.620207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.150 [2024-11-19 11:09:46.620232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.150 [2024-11-19 11:09:46.630438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.151 [2024-11-19 11:09:46.630463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.151 [2024-11-19 11:09:46.640513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.151 [2024-11-19 11:09:46.640539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.651690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.651732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.662210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.662235] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.672500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.672526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.682600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.682627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.692546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.692572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.702875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.702900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.712313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.712337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.722148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.722172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.732477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.732504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.742596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.742623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:51.409 [2024-11-19 11:09:46.752694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.752720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.762995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.763021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.773187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.773219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.783578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.783606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.794405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.794440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.806970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.806995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.816519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.816548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.826802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.826828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.836743] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.836769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.847507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.847534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.860305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.860330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.870326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.870374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.409 [2024-11-19 11:09:46.880686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.409 [2024-11-19 11:09:46.880726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.410 [2024-11-19 11:09:46.891327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.410 [2024-11-19 11:09:46.891376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.410 [2024-11-19 11:09:46.904247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.410 [2024-11-19 11:09:46.904272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:46.914886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:46.914912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:46.925086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:46.925112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:46.935919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:46.935944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:46.948573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:46.948600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:46.958455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:46.958482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:46.969100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:46.969125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:46.979153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:46.979189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:46.989920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:46.989945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.000020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.000044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.009883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 
[2024-11-19 11:09:47.009909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.020460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.020488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.032888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.032914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.042084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.042109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.052804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.052829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.063477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.063503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.073898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.073923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.084468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.084495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.095123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.095148] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.105524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.105551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.116101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.116126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.128807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.128832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.138226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.138251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.148662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.148687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.668 [2024-11-19 11:09:47.159369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.668 [2024-11-19 11:09:47.159396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.927 [2024-11-19 11:09:47.171602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.927 [2024-11-19 11:09:47.171629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.927 [2024-11-19 11:09:47.182015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.927 [2024-11-19 11:09:47.182047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:51.927 [2024-11-19 11:09:47.192483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.192510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.205603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.205632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.216586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.216613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.225588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.225614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.236950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.236975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.249043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.249069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.258870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.258894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.269901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.269926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.282085] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.282110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.292099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.292124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.302896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.302921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.313849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.313873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.324396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.324437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.337000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.337026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.346502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.346530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.357033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.357059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.367783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.367809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.379856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.379890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.389231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.389263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.399695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.399735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 12097.75 IOPS, 94.51 MiB/s [2024-11-19T10:09:47.425Z] [2024-11-19 11:09:47.412064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.412089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:51.928 [2024-11-19 11:09:47.422324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:51.928 [2024-11-19 11:09:47.422374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.186 [2024-11-19 11:09:47.433132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.186 [2024-11-19 11:09:47.433157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.186 [2024-11-19 11:09:47.443417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.186 [2024-11-19 11:09:47.443444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.186 [2024-11-19 11:09:47.453902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:52.186 [2024-11-19 11:09:47.453927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.186 [2024-11-19 11:09:47.464173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.186 [2024-11-19 11:09:47.464199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.186 [2024-11-19 11:09:47.474701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.186 [2024-11-19 11:09:47.474739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.186 [2024-11-19 11:09:47.485257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.186 [2024-11-19 11:09:47.485282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.186 [2024-11-19 11:09:47.497610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.186 [2024-11-19 11:09:47.497636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.186 [2024-11-19 11:09:47.509152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.186 [2024-11-19 11:09:47.509177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.518533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.518559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.529167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.529192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.539676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 
[2024-11-19 11:09:47.539706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.550250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.550275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.560777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.560802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.573179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.573203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.583266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.583292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.594460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.594487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.605320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.605358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.616084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.616108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.627233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.627257] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.637943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.637968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.650171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.650195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.660181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.660206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.670921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.670946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.187 [2024-11-19 11:09:47.683424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.187 [2024-11-19 11:09:47.683450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.445 [2024-11-19 11:09:47.694010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.445 [2024-11-19 11:09:47.694034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.445 [2024-11-19 11:09:47.704503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.445 [2024-11-19 11:09:47.704529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.445 [2024-11-19 11:09:47.717680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.445 [2024-11-19 11:09:47.717720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:07:52.445 [2024-11-19 11:09:47.728855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.445 [2024-11-19 11:09:47.728895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [... identical 'Requested NSID 1 already in use' / 'Unable to add namespace' error pairs repeated for each retry between 11:09:47.737 and 11:09:48.331, elided ...] 00:07:52.963 [2024-11-19 11:09:48.340116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:07:52.963 [2024-11-19 11:09:48.340141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.963 [2024-11-19 11:09:48.351558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.963 [2024-11-19 11:09:48.351587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.964 [2024-11-19 11:09:48.363421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.964 [2024-11-19 11:09:48.363449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.964 [2024-11-19 11:09:48.373208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.964 [2024-11-19 11:09:48.373232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.964 [2024-11-19 11:09:48.383809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.964 [2024-11-19 11:09:48.383834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.964 [2024-11-19 11:09:48.394360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.964 [2024-11-19 11:09:48.394396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.964 [2024-11-19 11:09:48.405143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.964 [2024-11-19 11:09:48.405174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.964 12096.60 IOPS, 94.50 MiB/s [2024-11-19T10:09:48.461Z] [2024-11-19 11:09:48.417375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.964 [2024-11-19 11:09:48.417410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.964 00:07:52.964 Latency(us) 00:07:52.964 [2024-11-19T10:09:48.461Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.964 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:07:52.964 Nvme1n1 : 5.01 12098.34 94.52 0.00 0.00 10566.81 4466.16 20388.98 00:07:52.964 [2024-11-19T10:09:48.461Z] =================================================================================================================== 00:07:52.964 [2024-11-19T10:09:48.461Z] Total : 12098.34 94.52 0.00 0.00 10566.81 4466.16 20388.98 00:07:52.964 [2024-11-19 11:09:48.425080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.964 [2024-11-19 11:09:48.425104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.964 [2024-11-19 11:09:48.433104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.964 [2024-11-19 11:09:48.433126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.964 [2024-11-19 11:09:48.441121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.964 [2024-11-19 11:09:48.441142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.964 [2024-11-19 11:09:48.449204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.964 [2024-11-19 11:09:48.449247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:52.964 [2024-11-19 11:09:48.457237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:52.964 [2024-11-19 11:09:48.457289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.222 [2024-11-19 11:09:48.465247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:53.222 [2024-11-19 11:09:48.465302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.222 [2024-11-19 
11:09:48.473267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:53.223 [2024-11-19 11:09:48.473311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [... identical error pairs repeated for each retry between 11:09:48.481 and 11:09:48.593, elided ...] 00:07:53.223 [2024-11-19 11:09:48.601639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:07:53.223 [2024-11-19 11:09:48.601684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.223 [2024-11-19 11:09:48.609668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:53.223 [2024-11-19 11:09:48.609713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.223 [2024-11-19 11:09:48.617631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:53.223 [2024-11-19 11:09:48.617666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.223 [2024-11-19 11:09:48.625668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:53.223 [2024-11-19 11:09:48.625687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.223 [2024-11-19 11:09:48.633690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:53.223 [2024-11-19 11:09:48.633725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2519784) - No such process 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2519784 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:53.223 delay0 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.223 11:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:07:53.223 [2024-11-19 11:09:48.714212] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:59.784 Initializing NVMe Controllers 00:07:59.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:59.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:59.784 Initialization complete. Launching workers. 
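The zcopy steps traced above (remove NSID 1, wrap malloc0 in a delay bdev, re-add it as the namespace, then abort in-flight I/O with the abort example) can be sketched as a dry-run script. `RPC="echo rpc.py"` is an assumption used so nothing is actually executed; in a real run you would point it at SPDK's `scripts/rpc.py` against a live target.

```shell
#!/bin/sh
# Dry-run sketch of the zcopy.sh RPC sequence traced above.
# RPC is set to echo, so this only prints the rpc.py invocations.
RPC="echo rpc.py"

# Drop the existing namespace so it can be replaced by a delay bdev.
$RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# Wrap malloc0 in delay0, adding 1000000 us of latency to reads,
# writes, and their tail latencies alike.
$RPC bdev_delay_create -b malloc0 -d delay0 \
  -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Expose the slow bdev as NSID 1 again.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Queue deep random I/O against it and abort commands mid-flight.
echo build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
```

With the delay bdev in place, every command sits queued long enough for the abort example to cancel it, which is what produces the submitted/success/unsuccessful abort counts the example reports.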
00:07:59.784 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 290, failed: 6197 00:07:59.784 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6422, failed to submit 65 00:07:59.784 success 6316, unsuccessful 106, failed 0 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.784 rmmod nvme_tcp 00:07:59.784 rmmod nvme_fabrics 00:07:59.784 rmmod nvme_keyring 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2518438 ']' 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2518438 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2518438 ']' 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2518438 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:07:59.784 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.785 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2518438 00:07:59.785 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:59.785 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:59.785 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2518438' 00:07:59.785 killing process with pid 2518438 00:07:59.785 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2518438 00:07:59.785 11:09:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2518438 00:07:59.785 11:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:59.785 11:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:59.785 11:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:59.785 11:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:07:59.785 11:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:07:59.785 11:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:59.785 11:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.785 11:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.785 11:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.785 11:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:59.785 11:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.785 11:09:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:02.322 00:08:02.322 real 0m28.486s 00:08:02.322 user 0m39.490s 00:08:02.322 sys 0m10.449s 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:02.322 ************************************ 00:08:02.322 END TEST nvmf_zcopy 00:08:02.322 ************************************ 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.322 ************************************ 00:08:02.322 START TEST nvmf_nmic 00:08:02.322 ************************************ 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:02.322 * Looking for test storage... 
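The nvmftestfini teardown captured above (unload the NVMe-oF kernel modules, restore iptables without the SPDK_NVMF rules, flush the test interface address) can be sketched as a dry run. `SUDO="echo sudo"` is an assumption that keeps every step a harmless print; the interface name `cvl_0_1` is the one from this run.

```shell
#!/bin/sh
# Dry-run sketch of the nvmftestfini teardown above; SUDO echoes
# instead of executing, so this is safe to run anywhere.
SUDO="echo sudo"

# Unload the initiator modules the test loaded: nvme-tcp first, then
# nvme-fabrics (nvme_keyring falls out as a dependency, as the rmmod
# lines in the log show).
for mod in nvme-tcp nvme-fabrics; do
  $SUDO modprobe -v -r "$mod"
done

# Restore firewall state minus any SPDK_NVMF rules the test added.
echo 'iptables-save | grep -v SPDK_NVMF | iptables-restore'

# Drop the IPv4 address from the test interface used in this run.
$SUDO ip -4 addr flush cvl_0_1
```

This mirrors the order the log shows: module unload, iptables restore, then the address flush once the SPDK network namespace is gone.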
00:08:02.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.322 11:09:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.322 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:02.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.323 --rc genhtml_branch_coverage=1 00:08:02.323 --rc genhtml_function_coverage=1 00:08:02.323 --rc genhtml_legend=1 00:08:02.323 --rc geninfo_all_blocks=1 00:08:02.323 --rc geninfo_unexecuted_blocks=1 
00:08:02.323 00:08:02.323 ' 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:02.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.323 --rc genhtml_branch_coverage=1 00:08:02.323 --rc genhtml_function_coverage=1 00:08:02.323 --rc genhtml_legend=1 00:08:02.323 --rc geninfo_all_blocks=1 00:08:02.323 --rc geninfo_unexecuted_blocks=1 00:08:02.323 00:08:02.323 ' 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:02.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.323 --rc genhtml_branch_coverage=1 00:08:02.323 --rc genhtml_function_coverage=1 00:08:02.323 --rc genhtml_legend=1 00:08:02.323 --rc geninfo_all_blocks=1 00:08:02.323 --rc geninfo_unexecuted_blocks=1 00:08:02.323 00:08:02.323 ' 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:02.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.323 --rc genhtml_branch_coverage=1 00:08:02.323 --rc genhtml_function_coverage=1 00:08:02.323 --rc genhtml_legend=1 00:08:02.323 --rc geninfo_all_blocks=1 00:08:02.323 --rc geninfo_unexecuted_blocks=1 00:08:02.323 00:08:02.323 ' 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.323 11:09:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:02.323 
11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:02.323 11:09:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.928 11:10:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.928 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:08:04.929 Found 0000:82:00.0 (0x8086 - 0x159b) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:08:04.929 Found 0000:82:00.1 (0x8086 - 0x159b) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:08:04.929 Found net devices under 0000:82:00.0: cvl_0_0 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:08:04.929 Found net devices under 0000:82:00.1: cvl_0_1 00:08:04.929 
11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:04.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:08:04.929 00:08:04.929 --- 10.0.0.2 ping statistics --- 00:08:04.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.929 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:08:04.929 00:08:04.929 --- 10.0.0.1 ping statistics --- 00:08:04.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.929 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2523491 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
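The namespace plumbing traced above (move one port of the NIC pair into a private netns, address both sides, open TCP 4420, then ping across) can be condensed into a standalone sketch. The interface names `cvl_0_0`/`cvl_0_1`, the namespace `cvl_0_0_ns_spdk`, and the 10.0.0.1/10.0.0.2 addresses are the ones discovered in this particular run; the `setup_cmds` helper is hypothetical, and the commands are only printed here rather than executed, since actually applying them requires root:

```shell
#!/bin/sh
# Sketch of the nvmf_tcp_init plumbing seen in the trace above.
# Names/addresses are taken from this run, not fixed constants.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # moved into the namespace; target side (10.0.0.2)
INI_IF=cvl_0_1   # stays in the default namespace; initiator side (10.0.0.1)

setup_cmds() {
  # Print (do not run) the equivalent command sequence.
  cat <<EOF
ip netns add $NS
ip link set $TGT_IF netns $NS
ip addr add 10.0.0.1/24 dev $INI_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $NS ping -c 1 10.0.0.1
EOF
}
setup_cmds
```

The bidirectional ping at the end mirrors the check in the trace: it confirms the veth-less, physical-port namespace split is routable in both directions before the target is started inside the namespace.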
00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2523491 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2523491 ']' 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.929 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:04.929 [2024-11-19 11:10:00.359324] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:08:04.929 [2024-11-19 11:10:00.359450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.188 [2024-11-19 11:10:00.448118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.188 [2024-11-19 11:10:00.514022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.188 [2024-11-19 11:10:00.514072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:05.188 [2024-11-19 11:10:00.514103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.188 [2024-11-19 11:10:00.514116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.188 [2024-11-19 11:10:00.514126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.188 [2024-11-19 11:10:00.518386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.188 [2024-11-19 11:10:00.518454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.188 [2024-11-19 11:10:00.518521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.188 [2024-11-19 11:10:00.518525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.188 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.188 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:05.188 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:05.188 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:05.188 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.188 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.188 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:05.188 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.188 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.188 [2024-11-19 11:10:00.675186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.188 
11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.188 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:05.188 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.188 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 Malloc0 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 [2024-11-19 11:10:00.734485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:05.448 test case1: single bdev can't be used in multiple subsystems 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 [2024-11-19 11:10:00.758278] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:05.448 [2024-11-19 
11:10:00.758308] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:05.448 [2024-11-19 11:10:00.758324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.448 request: 00:08:05.448 { 00:08:05.448 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:05.448 "namespace": { 00:08:05.448 "bdev_name": "Malloc0", 00:08:05.448 "no_auto_visible": false 00:08:05.448 }, 00:08:05.448 "method": "nvmf_subsystem_add_ns", 00:08:05.448 "req_id": 1 00:08:05.448 } 00:08:05.448 Got JSON-RPC error response 00:08:05.448 response: 00:08:05.448 { 00:08:05.448 "code": -32602, 00:08:05.448 "message": "Invalid parameters" 00:08:05.448 } 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:05.448 Adding namespace failed - expected result. 
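Test case1 above is an expected-failure check: `nmic_status` is set when the second `nvmf_subsystem_add_ns` fails (because Malloc0 is already claimed by cnode1), and a status of 0 would mean the test itself failed. A minimal sketch of that pattern, assuming a hypothetical `expect_failure` helper; `false` stands in for the failing `rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0` call, which is not reproduced here since it needs a running target:

```shell
#!/bin/sh
# Expected-failure pattern mirroring nmic.sh's nmic_status handling.
expect_failure() {
  # Run the candidate command; record whether it failed.
  nmic_status=0
  "$@" || nmic_status=1
  if [ "$nmic_status" -eq 0 ]; then
    # The operation succeeded, which for this test is the error case.
    echo "Adding namespace passed - failure expected."
    return 1
  fi
  echo " Adding namespace failed - expected result."
}

# Stand-in for the RPC that must be rejected (bdev already claimed).
expect_failure false
```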
00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:05.448 test case2: host connect to nvmf target in multiple paths 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 [2024-11-19 11:10:00.770435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.448 11:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:06.015 11:10:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:06.948 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:06.948 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:06.948 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:06.948 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:06.948 11:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:08:08.847 11:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:08.847 11:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:08.847 11:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:08.847 11:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:08.847 11:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:08.847 11:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:08.847 11:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:08.847 [global] 00:08:08.847 thread=1 00:08:08.847 invalidate=1 00:08:08.847 rw=write 00:08:08.847 time_based=1 00:08:08.847 runtime=1 00:08:08.847 ioengine=libaio 00:08:08.847 direct=1 00:08:08.847 bs=4096 00:08:08.847 iodepth=1 00:08:08.847 norandommap=0 00:08:08.847 numjobs=1 00:08:08.847 00:08:08.847 verify_dump=1 00:08:08.847 verify_backlog=512 00:08:08.847 verify_state_save=0 00:08:08.847 do_verify=1 00:08:08.847 verify=crc32c-intel 00:08:08.847 [job0] 00:08:08.847 filename=/dev/nvme0n1 00:08:08.847 Could not set queue depth (nvme0n1) 00:08:08.847 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:08.847 fio-3.35 00:08:08.847 Starting 1 thread 00:08:10.221 00:08:10.221 job0: (groupid=0, jobs=1): err= 0: pid=2524025: Tue Nov 19 11:10:05 2024 00:08:10.221 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:08:10.221 slat (nsec): min=8038, max=28645, avg=16305.77, stdev=5954.65 00:08:10.221 clat (usec): min=40631, max=41030, avg=40957.23, stdev=80.59 00:08:10.221 lat (usec): min=40639, max=41044, 
avg=40973.54, stdev=81.19 00:08:10.221 clat percentiles (usec): 00:08:10.221 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:08:10.221 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:10.221 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:10.221 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:10.221 | 99.99th=[41157] 00:08:10.221 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:08:10.221 slat (usec): min=7, max=28189, avg=64.37, stdev=1245.42 00:08:10.221 clat (usec): min=130, max=231, avg=152.13, stdev=11.97 00:08:10.221 lat (usec): min=138, max=28393, avg=216.50, stdev=1247.74 00:08:10.221 clat percentiles (usec): 00:08:10.221 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143], 00:08:10.221 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:08:10.221 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 176], 00:08:10.221 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 231], 99.95th=[ 231], 00:08:10.221 | 99.99th=[ 231] 00:08:10.221 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:10.221 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:10.221 lat (usec) : 250=95.88% 00:08:10.221 lat (msec) : 50=4.12% 00:08:10.221 cpu : usr=0.39%, sys=0.49%, ctx=537, majf=0, minf=1 00:08:10.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:10.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:10.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:10.221 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:10.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:10.221 00:08:10.221 Run status group 0 (all jobs): 00:08:10.221 READ: bw=86.8KiB/s (88.9kB/s), 86.8KiB/s-86.8KiB/s (88.9kB/s-88.9kB/s), io=88.0KiB (90.1kB), run=1014-1014msec 
00:08:10.221 WRITE: bw=2020KiB/s (2068kB/s), 2020KiB/s-2020KiB/s (2068kB/s-2068kB/s), io=2048KiB (2097kB), run=1014-1014msec 00:08:10.221 00:08:10.221 Disk stats (read/write): 00:08:10.221 nvme0n1: ios=45/512, merge=0/0, ticks=1764/73, in_queue=1837, util=98.70% 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:10.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 
00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:10.221 rmmod nvme_tcp 00:08:10.221 rmmod nvme_fabrics 00:08:10.221 rmmod nvme_keyring 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2523491 ']' 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2523491 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2523491 ']' 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2523491 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2523491 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2523491' 00:08:10.221 killing process with pid 2523491 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2523491 00:08:10.221 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2523491 00:08:10.481 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:10.481 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:10.481 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:10.481 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:10.481 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:10.481 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:10.481 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:10.481 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:10.481 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:10.481 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.481 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.740 11:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.650 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.650 00:08:12.650 real 0m10.723s 00:08:12.650 user 0m22.987s 00:08:12.650 sys 0m2.870s 00:08:12.650 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.650 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:12.650 ************************************ 00:08:12.650 END TEST nvmf_nmic 00:08:12.650 ************************************ 00:08:12.650 11:10:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:08:12.650 11:10:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:12.650 11:10:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.650 11:10:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.650 ************************************ 00:08:12.650 START TEST nvmf_fio_target 00:08:12.650 ************************************ 00:08:12.650 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:12.650 * Looking for test storage... 00:08:12.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.650 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.650 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.650 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.910 11:10:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.910 --rc genhtml_branch_coverage=1 00:08:12.910 --rc genhtml_function_coverage=1 00:08:12.910 --rc genhtml_legend=1 00:08:12.910 --rc geninfo_all_blocks=1 00:08:12.910 --rc geninfo_unexecuted_blocks=1 00:08:12.910 00:08:12.910 ' 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.910 --rc genhtml_branch_coverage=1 00:08:12.910 --rc genhtml_function_coverage=1 00:08:12.910 --rc genhtml_legend=1 00:08:12.910 --rc geninfo_all_blocks=1 00:08:12.910 --rc geninfo_unexecuted_blocks=1 00:08:12.910 00:08:12.910 ' 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.910 --rc genhtml_branch_coverage=1 00:08:12.910 --rc genhtml_function_coverage=1 00:08:12.910 --rc genhtml_legend=1 00:08:12.910 --rc geninfo_all_blocks=1 00:08:12.910 --rc geninfo_unexecuted_blocks=1 00:08:12.910 00:08:12.910 ' 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.910 --rc 
genhtml_branch_coverage=1 00:08:12.910 --rc genhtml_function_coverage=1 00:08:12.910 --rc genhtml_legend=1 00:08:12.910 --rc geninfo_all_blocks=1 00:08:12.910 --rc geninfo_unexecuted_blocks=1 00:08:12.910 00:08:12.910 ' 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.910 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:12.911 11:10:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.456 11:10:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:08:15.456 Found 0000:82:00.0 (0x8086 - 0x159b) 00:08:15.456 11:10:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.456 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:08:15.456 Found 0000:82:00.1 (0x8086 - 0x159b) 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:08:15.457 Found net devices under 0000:82:00.0: cvl_0_0 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:08:15.457 Found net devices under 0000:82:00.1: cvl_0_1 
00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:15.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:08:15.457 00:08:15.457 --- 10.0.0.2 ping statistics --- 00:08:15.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.457 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:08:15.457 00:08:15.457 --- 10.0.0.1 ping statistics --- 00:08:15.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.457 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.457 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:15.716 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2526514 00:08:15.716 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.716 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2526514 00:08:15.716 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2526514 ']' 00:08:15.716 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.716 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.716 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.716 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.716 11:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:15.716 [2024-11-19 11:10:11.004825] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:08:15.716 [2024-11-19 11:10:11.004923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.716 [2024-11-19 11:10:11.088114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.716 [2024-11-19 11:10:11.148971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.716 [2024-11-19 11:10:11.149022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.716 [2024-11-19 11:10:11.149050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.716 [2024-11-19 11:10:11.149061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.716 [2024-11-19 11:10:11.149071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:15.716 [2024-11-19 11:10:11.150671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.716 [2024-11-19 11:10:11.150751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.716 [2024-11-19 11:10:11.150818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.716 [2024-11-19 11:10:11.150821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.975 11:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.975 11:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:15.975 11:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.975 11:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.975 11:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:15.975 11:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.975 11:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:16.233 [2024-11-19 11:10:11.546897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.233 11:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:16.491 11:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:16.491 11:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:16.749 11:10:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:16.749 11:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.007 11:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:17.007 11:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.265 11:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:17.265 11:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:17.523 11:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:18.088 11:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:18.088 11:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:18.088 11:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:18.088 11:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:18.654 11:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:18.654 11:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:08:18.654 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:18.911 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:18.912 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.477 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:19.477 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:19.477 11:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.734 [2024-11-19 11:10:15.200858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.734 11:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:19.991 11:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:20.554 11:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:08:21.118 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:21.118 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:21.118 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:21.118 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:21.118 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:21.118 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:23.011 11:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:23.011 11:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:23.011 11:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:23.011 11:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:23.011 11:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:23.011 11:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:23.011 11:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:23.011 [global] 00:08:23.011 thread=1 00:08:23.011 invalidate=1 00:08:23.011 rw=write 00:08:23.011 time_based=1 00:08:23.011 runtime=1 00:08:23.011 ioengine=libaio 00:08:23.011 direct=1 00:08:23.011 bs=4096 00:08:23.011 iodepth=1 00:08:23.011 norandommap=0 00:08:23.011 numjobs=1 00:08:23.011 00:08:23.011 
verify_dump=1 00:08:23.011 verify_backlog=512 00:08:23.011 verify_state_save=0 00:08:23.011 do_verify=1 00:08:23.011 verify=crc32c-intel 00:08:23.011 [job0] 00:08:23.011 filename=/dev/nvme0n1 00:08:23.011 [job1] 00:08:23.011 filename=/dev/nvme0n2 00:08:23.011 [job2] 00:08:23.011 filename=/dev/nvme0n3 00:08:23.011 [job3] 00:08:23.011 filename=/dev/nvme0n4 00:08:23.269 Could not set queue depth (nvme0n1) 00:08:23.269 Could not set queue depth (nvme0n2) 00:08:23.269 Could not set queue depth (nvme0n3) 00:08:23.269 Could not set queue depth (nvme0n4) 00:08:23.269 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:23.269 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:23.269 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:23.269 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:23.269 fio-3.35 00:08:23.269 Starting 4 threads 00:08:24.641 00:08:24.641 job0: (groupid=0, jobs=1): err= 0: pid=2527594: Tue Nov 19 11:10:19 2024 00:08:24.641 read: IOPS=23, BW=92.2KiB/s (94.4kB/s)(96.0KiB/1041msec) 00:08:24.641 slat (nsec): min=9388, max=30424, avg=15768.83, stdev=6326.64 00:08:24.642 clat (usec): min=288, max=41985, avg=39296.30, stdev=8312.18 00:08:24.642 lat (usec): min=299, max=41999, avg=39312.07, stdev=8313.23 00:08:24.642 clat percentiles (usec): 00:08:24.642 | 1.00th=[ 289], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:08:24.642 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:24.642 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:24.642 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:24.642 | 99.99th=[42206] 00:08:24.642 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:08:24.642 slat (nsec): min=9197, max=35718, 
avg=11371.38, stdev=2712.04 00:08:24.642 clat (usec): min=145, max=250, avg=174.52, stdev=14.15 00:08:24.642 lat (usec): min=154, max=265, avg=185.89, stdev=14.80 00:08:24.642 clat percentiles (usec): 00:08:24.642 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:08:24.642 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:08:24.642 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 200], 00:08:24.642 | 99.00th=[ 215], 99.50th=[ 231], 99.90th=[ 251], 99.95th=[ 251], 00:08:24.642 | 99.99th=[ 251] 00:08:24.642 bw ( KiB/s): min= 4096, max= 4096, per=20.27%, avg=4096.00, stdev= 0.00, samples=1 00:08:24.642 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:24.642 lat (usec) : 250=95.34%, 500=0.37% 00:08:24.642 lat (msec) : 50=4.29% 00:08:24.642 cpu : usr=0.58%, sys=0.48%, ctx=537, majf=0, minf=1 00:08:24.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:24.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.642 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:24.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:24.642 job1: (groupid=0, jobs=1): err= 0: pid=2527595: Tue Nov 19 11:10:19 2024 00:08:24.642 read: IOPS=1717, BW=6869KiB/s (7034kB/s)(7000KiB/1019msec) 00:08:24.642 slat (nsec): min=5315, max=46926, avg=10358.80, stdev=6168.48 00:08:24.642 clat (usec): min=175, max=41321, avg=334.81, stdev=1387.15 00:08:24.642 lat (usec): min=181, max=41328, avg=345.17, stdev=1387.35 00:08:24.642 clat percentiles (usec): 00:08:24.642 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:08:24.642 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 245], 60.00th=[ 277], 00:08:24.642 | 70.00th=[ 310], 80.00th=[ 363], 90.00th=[ 429], 95.00th=[ 570], 00:08:24.642 | 99.00th=[ 725], 99.50th=[ 775], 99.90th=[41157], 99.95th=[41157], 
00:08:24.642 | 99.99th=[41157] 00:08:24.642 write: IOPS=2009, BW=8039KiB/s (8232kB/s)(8192KiB/1019msec); 0 zone resets 00:08:24.642 slat (usec): min=7, max=31971, avg=28.78, stdev=706.45 00:08:24.642 clat (usec): min=122, max=542, avg=167.32, stdev=33.75 00:08:24.642 lat (usec): min=130, max=32168, avg=196.10, stdev=708.07 00:08:24.642 clat percentiles (usec): 00:08:24.642 | 1.00th=[ 126], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 139], 00:08:24.642 | 30.00th=[ 147], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 169], 00:08:24.642 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 210], 95.00th=[ 231], 00:08:24.642 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 363], 99.95th=[ 404], 00:08:24.642 | 99.99th=[ 545] 00:08:24.642 bw ( KiB/s): min= 8192, max= 8192, per=40.55%, avg=8192.00, stdev= 0.00, samples=2 00:08:24.642 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:08:24.642 lat (usec) : 250=76.30%, 500=20.88%, 750=2.47%, 1000=0.29% 00:08:24.642 lat (msec) : 50=0.05% 00:08:24.642 cpu : usr=2.75%, sys=4.13%, ctx=3803, majf=0, minf=1 00:08:24.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:24.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.642 issued rwts: total=1750,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:24.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:24.642 job2: (groupid=0, jobs=1): err= 0: pid=2527598: Tue Nov 19 11:10:19 2024 00:08:24.642 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:08:24.642 slat (nsec): min=5486, max=72220, avg=17127.10, stdev=11345.71 00:08:24.642 clat (usec): min=200, max=41998, avg=720.99, stdev=3999.74 00:08:24.642 lat (usec): min=209, max=42014, avg=738.12, stdev=3999.33 00:08:24.642 clat percentiles (usec): 00:08:24.642 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 231], 20.00th=[ 249], 00:08:24.642 | 30.00th=[ 273], 40.00th=[ 289], 
50.00th=[ 314], 60.00th=[ 334], 00:08:24.642 | 70.00th=[ 363], 80.00th=[ 396], 90.00th=[ 445], 95.00th=[ 478], 00:08:24.642 | 99.00th=[ 668], 99.50th=[40633], 99.90th=[41681], 99.95th=[42206], 00:08:24.642 | 99.99th=[42206] 00:08:24.642 write: IOPS=1160, BW=4643KiB/s (4755kB/s)(4648KiB/1001msec); 0 zone resets 00:08:24.642 slat (nsec): min=7061, max=85181, avg=9442.30, stdev=3892.59 00:08:24.642 clat (usec): min=137, max=331, avg=193.61, stdev=35.69 00:08:24.642 lat (usec): min=145, max=340, avg=203.05, stdev=36.04 00:08:24.642 clat percentiles (usec): 00:08:24.642 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 165], 00:08:24.642 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 194], 00:08:24.642 | 70.00th=[ 206], 80.00th=[ 217], 90.00th=[ 251], 95.00th=[ 273], 00:08:24.642 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 322], 99.95th=[ 330], 00:08:24.642 | 99.99th=[ 330] 00:08:24.642 bw ( KiB/s): min= 7192, max= 7192, per=35.60%, avg=7192.00, stdev= 0.00, samples=1 00:08:24.642 iops : min= 1798, max= 1798, avg=1798.00, stdev= 0.00, samples=1 00:08:24.642 lat (usec) : 250=57.37%, 500=40.90%, 750=1.28% 00:08:24.642 lat (msec) : 50=0.46% 00:08:24.642 cpu : usr=1.40%, sys=3.10%, ctx=2186, majf=0, minf=2 00:08:24.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:24.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.642 issued rwts: total=1024,1162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:24.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:24.642 job3: (groupid=0, jobs=1): err= 0: pid=2527599: Tue Nov 19 11:10:19 2024 00:08:24.642 read: IOPS=997, BW=3988KiB/s (4084kB/s)(4140KiB/1038msec) 00:08:24.642 slat (nsec): min=7223, max=53200, avg=11200.63, stdev=5489.03 00:08:24.642 clat (usec): min=185, max=41085, avg=677.12, stdev=4177.76 00:08:24.642 lat (usec): min=192, max=41102, 
avg=688.32, stdev=4178.71 00:08:24.642 clat percentiles (usec): 00:08:24.642 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:08:24.642 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 229], 00:08:24.642 | 70.00th=[ 249], 80.00th=[ 277], 90.00th=[ 314], 95.00th=[ 388], 00:08:24.642 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:24.642 | 99.99th=[41157] 00:08:24.642 write: IOPS=1479, BW=5919KiB/s (6061kB/s)(6144KiB/1038msec); 0 zone resets 00:08:24.642 slat (nsec): min=7433, max=90819, avg=11933.22, stdev=5437.48 00:08:24.642 clat (usec): min=138, max=453, avg=194.11, stdev=38.19 00:08:24.642 lat (usec): min=147, max=464, avg=206.05, stdev=39.77 00:08:24.642 clat percentiles (usec): 00:08:24.642 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:08:24.642 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 194], 00:08:24.642 | 70.00th=[ 204], 80.00th=[ 217], 90.00th=[ 243], 95.00th=[ 269], 00:08:24.642 | 99.00th=[ 322], 99.50th=[ 367], 99.90th=[ 453], 99.95th=[ 453], 00:08:24.642 | 99.99th=[ 453] 00:08:24.642 bw ( KiB/s): min= 3168, max= 9120, per=30.41%, avg=6144.00, stdev=4208.70, samples=2 00:08:24.642 iops : min= 792, max= 2280, avg=1536.00, stdev=1052.17, samples=2 00:08:24.642 lat (usec) : 250=82.69%, 500=16.53%, 750=0.27% 00:08:24.642 lat (msec) : 2=0.08%, 50=0.43% 00:08:24.642 cpu : usr=1.54%, sys=4.15%, ctx=2574, majf=0, minf=1 00:08:24.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:24.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.642 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:24.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:24.642 00:08:24.642 Run status group 0 (all jobs): 00:08:24.642 READ: bw=14.4MiB/s (15.1MB/s), 92.2KiB/s-6869KiB/s (94.4kB/s-7034kB/s), io=15.0MiB 
(15.7MB), run=1001-1041msec 00:08:24.642 WRITE: bw=19.7MiB/s (20.7MB/s), 1967KiB/s-8039KiB/s (2015kB/s-8232kB/s), io=20.5MiB (21.5MB), run=1001-1041msec 00:08:24.642 00:08:24.642 Disk stats (read/write): 00:08:24.642 nvme0n1: ios=69/512, merge=0/0, ticks=758/88, in_queue=846, util=86.87% 00:08:24.642 nvme0n2: ios=1578/1963, merge=0/0, ticks=567/324, in_queue=891, util=90.66% 00:08:24.642 nvme0n3: ios=891/1024, merge=0/0, ticks=666/203, in_queue=869, util=94.90% 00:08:24.642 nvme0n4: ios=1087/1536, merge=0/0, ticks=851/294, in_queue=1145, util=94.33% 00:08:24.642 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:24.642 [global] 00:08:24.642 thread=1 00:08:24.642 invalidate=1 00:08:24.642 rw=randwrite 00:08:24.642 time_based=1 00:08:24.642 runtime=1 00:08:24.642 ioengine=libaio 00:08:24.642 direct=1 00:08:24.642 bs=4096 00:08:24.642 iodepth=1 00:08:24.642 norandommap=0 00:08:24.642 numjobs=1 00:08:24.642 00:08:24.642 verify_dump=1 00:08:24.642 verify_backlog=512 00:08:24.642 verify_state_save=0 00:08:24.642 do_verify=1 00:08:24.642 verify=crc32c-intel 00:08:24.642 [job0] 00:08:24.642 filename=/dev/nvme0n1 00:08:24.642 [job1] 00:08:24.642 filename=/dev/nvme0n2 00:08:24.642 [job2] 00:08:24.642 filename=/dev/nvme0n3 00:08:24.642 [job3] 00:08:24.642 filename=/dev/nvme0n4 00:08:24.642 Could not set queue depth (nvme0n1) 00:08:24.642 Could not set queue depth (nvme0n2) 00:08:24.642 Could not set queue depth (nvme0n3) 00:08:24.642 Could not set queue depth (nvme0n4) 00:08:24.900 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:24.900 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:24.900 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:08:24.900 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:24.900 fio-3.35 00:08:24.900 Starting 4 threads 00:08:26.274 00:08:26.274 job0: (groupid=0, jobs=1): err= 0: pid=2527927: Tue Nov 19 11:10:21 2024 00:08:26.274 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:26.274 slat (nsec): min=6629, max=41027, avg=9730.39, stdev=3748.77 00:08:26.274 clat (usec): min=180, max=625, avg=249.42, stdev=55.85 00:08:26.274 lat (usec): min=188, max=633, avg=259.15, stdev=56.54 00:08:26.274 clat percentiles (usec): 00:08:26.274 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:08:26.274 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 245], 00:08:26.274 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 302], 95.00th=[ 351], 00:08:26.274 | 99.00th=[ 510], 99.50th=[ 553], 99.90th=[ 594], 99.95th=[ 603], 00:08:26.274 | 99.99th=[ 627] 00:08:26.274 write: IOPS=2299, BW=9199KiB/s (9420kB/s)(9208KiB/1001msec); 0 zone resets 00:08:26.274 slat (nsec): min=8220, max=62707, avg=16161.42, stdev=7126.10 00:08:26.274 clat (usec): min=135, max=439, avg=180.41, stdev=26.89 00:08:26.274 lat (usec): min=147, max=463, avg=196.57, stdev=30.87 00:08:26.274 clat percentiles (usec): 00:08:26.274 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:08:26.274 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 184], 00:08:26.274 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 217], 95.00th=[ 229], 00:08:26.274 | 99.00th=[ 260], 99.50th=[ 281], 99.90th=[ 330], 99.95th=[ 375], 00:08:26.274 | 99.99th=[ 441] 00:08:26.274 bw ( KiB/s): min= 8192, max= 8192, per=42.50%, avg=8192.00, stdev= 0.00, samples=1 00:08:26.274 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:26.274 lat (usec) : 250=84.25%, 500=15.20%, 750=0.55% 00:08:26.274 cpu : usr=2.90%, sys=6.20%, ctx=4352, majf=0, minf=1 00:08:26.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:08:26.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:26.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:26.274 issued rwts: total=2048,2302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:26.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:26.274 job1: (groupid=0, jobs=1): err= 0: pid=2527942: Tue Nov 19 11:10:21 2024 00:08:26.274 read: IOPS=1024, BW=4099KiB/s (4197kB/s)(4136KiB/1009msec) 00:08:26.274 slat (nsec): min=5314, max=55815, avg=9259.87, stdev=4066.92 00:08:26.274 clat (usec): min=190, max=41982, avg=640.36, stdev=3997.85 00:08:26.274 lat (usec): min=199, max=41995, avg=649.62, stdev=3998.24 00:08:26.274 clat percentiles (usec): 00:08:26.274 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 229], 00:08:26.274 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:08:26.274 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:08:26.274 | 99.00th=[ 404], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:08:26.274 | 99.99th=[42206] 00:08:26.274 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:08:26.274 slat (nsec): min=8264, max=58333, avg=14391.86, stdev=7644.35 00:08:26.274 clat (usec): min=129, max=440, avg=198.66, stdev=41.19 00:08:26.274 lat (usec): min=138, max=461, avg=213.05, stdev=44.17 00:08:26.274 clat percentiles (usec): 00:08:26.274 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 157], 00:08:26.274 | 30.00th=[ 169], 40.00th=[ 186], 50.00th=[ 198], 60.00th=[ 208], 00:08:26.274 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 249], 95.00th=[ 265], 00:08:26.274 | 99.00th=[ 322], 99.50th=[ 343], 99.90th=[ 400], 99.95th=[ 441], 00:08:26.274 | 99.99th=[ 441] 00:08:26.274 bw ( KiB/s): min= 4096, max= 8192, per=31.88%, avg=6144.00, stdev=2896.31, samples=2 00:08:26.274 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:08:26.274 lat (usec) : 250=77.78%, 500=21.83% 
00:08:26.274 lat (msec) : 50=0.39% 00:08:26.274 cpu : usr=1.79%, sys=3.97%, ctx=2571, majf=0, minf=2 00:08:26.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:26.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:26.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:26.274 issued rwts: total=1034,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:26.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:26.274 job2: (groupid=0, jobs=1): err= 0: pid=2527943: Tue Nov 19 11:10:21 2024 00:08:26.274 read: IOPS=318, BW=1275KiB/s (1305kB/s)(1276KiB/1001msec) 00:08:26.274 slat (nsec): min=6610, max=48041, avg=11533.52, stdev=6467.81 00:08:26.274 clat (usec): min=220, max=41280, avg=2748.99, stdev=9479.76 00:08:26.274 lat (usec): min=228, max=41286, avg=2760.52, stdev=9482.09 00:08:26.274 clat percentiles (usec): 00:08:26.274 | 1.00th=[ 231], 5.00th=[ 273], 10.00th=[ 289], 20.00th=[ 293], 00:08:26.274 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 351], 00:08:26.274 | 70.00th=[ 379], 80.00th=[ 429], 90.00th=[ 537], 95.00th=[40633], 00:08:26.274 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:26.274 | 99.99th=[41157] 00:08:26.274 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:08:26.274 slat (nsec): min=7461, max=43895, avg=11868.22, stdev=7012.20 00:08:26.274 clat (usec): min=157, max=712, avg=216.29, stdev=53.21 00:08:26.274 lat (usec): min=166, max=731, avg=228.16, stdev=54.15 00:08:26.274 clat percentiles (usec): 00:08:26.274 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:08:26.274 | 30.00th=[ 182], 40.00th=[ 194], 50.00th=[ 212], 60.00th=[ 225], 00:08:26.274 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 293], 00:08:26.274 | 99.00th=[ 355], 99.50th=[ 529], 99.90th=[ 717], 99.95th=[ 717], 00:08:26.274 | 99.99th=[ 717] 00:08:26.274 bw ( KiB/s): min= 4096, max= 4096, 
per=21.25%, avg=4096.00, stdev= 0.00, samples=1 00:08:26.274 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:26.274 lat (usec) : 250=51.99%, 500=42.60%, 750=3.01% 00:08:26.274 lat (msec) : 10=0.12%, 50=2.29% 00:08:26.274 cpu : usr=1.10%, sys=0.80%, ctx=832, majf=0, minf=2 00:08:26.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:26.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:26.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:26.274 issued rwts: total=319,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:26.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:26.274 job3: (groupid=0, jobs=1): err= 0: pid=2527944: Tue Nov 19 11:10:21 2024 00:08:26.274 read: IOPS=440, BW=1762KiB/s (1805kB/s)(1764KiB/1001msec) 00:08:26.274 slat (nsec): min=5410, max=33856, avg=9209.69, stdev=4921.15 00:08:26.274 clat (usec): min=194, max=42022, avg=1989.61, stdev=8316.98 00:08:26.274 lat (usec): min=200, max=42036, avg=1998.82, stdev=8318.98 00:08:26.274 clat percentiles (usec): 00:08:26.274 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:08:26.274 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:08:26.274 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 277], 95.00th=[ 302], 00:08:26.274 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:08:26.274 | 99.99th=[42206] 00:08:26.274 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:08:26.274 slat (nsec): min=6301, max=40258, avg=10230.08, stdev=4481.40 00:08:26.274 clat (usec): min=146, max=363, avg=216.09, stdev=38.52 00:08:26.274 lat (usec): min=153, max=403, avg=226.32, stdev=40.05 00:08:26.274 clat percentiles (usec): 00:08:26.274 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 172], 00:08:26.274 | 30.00th=[ 188], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 229], 00:08:26.274 | 70.00th=[ 235], 80.00th=[ 
245], 90.00th=[ 260], 95.00th=[ 277], 00:08:26.274 | 99.00th=[ 318], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:08:26.274 | 99.99th=[ 363] 00:08:26.274 bw ( KiB/s): min= 4096, max= 4096, per=21.25%, avg=4096.00, stdev= 0.00, samples=1 00:08:26.274 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:26.274 lat (usec) : 250=83.11%, 500=14.90% 00:08:26.274 lat (msec) : 50=1.99% 00:08:26.274 cpu : usr=0.70%, sys=0.80%, ctx=954, majf=0, minf=1 00:08:26.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:26.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:26.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:26.274 issued rwts: total=441,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:26.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:26.274 00:08:26.274 Run status group 0 (all jobs): 00:08:26.274 READ: bw=14.9MiB/s (15.6MB/s), 1275KiB/s-8184KiB/s (1305kB/s-8380kB/s), io=15.0MiB (15.7MB), run=1001-1009msec 00:08:26.274 WRITE: bw=18.8MiB/s (19.7MB/s), 2046KiB/s-9199KiB/s (2095kB/s-9420kB/s), io=19.0MiB (19.9MB), run=1001-1009msec 00:08:26.274 00:08:26.274 Disk stats (read/write): 00:08:26.274 nvme0n1: ios=1646/2048, merge=0/0, ticks=672/360, in_queue=1032, util=98.10% 00:08:26.274 nvme0n2: ios=1080/1536, merge=0/0, ticks=1497/296, in_queue=1793, util=98.17% 00:08:26.274 nvme0n3: ios=343/512, merge=0/0, ticks=1697/107, in_queue=1804, util=98.12% 00:08:26.274 nvme0n4: ios=68/512, merge=0/0, ticks=1024/111, in_queue=1135, util=97.90% 00:08:26.274 11:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:26.274 [global] 00:08:26.274 thread=1 00:08:26.274 invalidate=1 00:08:26.274 rw=write 00:08:26.275 time_based=1 00:08:26.275 runtime=1 00:08:26.275 ioengine=libaio 00:08:26.275 direct=1 00:08:26.275 
bs=4096 00:08:26.275 iodepth=128 00:08:26.275 norandommap=0 00:08:26.275 numjobs=1 00:08:26.275 00:08:26.275 verify_dump=1 00:08:26.275 verify_backlog=512 00:08:26.275 verify_state_save=0 00:08:26.275 do_verify=1 00:08:26.275 verify=crc32c-intel 00:08:26.275 [job0] 00:08:26.275 filename=/dev/nvme0n1 00:08:26.275 [job1] 00:08:26.275 filename=/dev/nvme0n2 00:08:26.275 [job2] 00:08:26.275 filename=/dev/nvme0n3 00:08:26.275 [job3] 00:08:26.275 filename=/dev/nvme0n4 00:08:26.275 Could not set queue depth (nvme0n1) 00:08:26.275 Could not set queue depth (nvme0n2) 00:08:26.275 Could not set queue depth (nvme0n3) 00:08:26.275 Could not set queue depth (nvme0n4) 00:08:26.275 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:26.275 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:26.275 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:26.275 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:26.275 fio-3.35 00:08:26.275 Starting 4 threads 00:08:27.650 00:08:27.650 job0: (groupid=0, jobs=1): err= 0: pid=2528178: Tue Nov 19 11:10:22 2024 00:08:27.650 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:08:27.650 slat (usec): min=2, max=19783, avg=188.95, stdev=1196.83 00:08:27.650 clat (usec): min=7157, max=82488, avg=24873.44, stdev=13939.06 00:08:27.650 lat (usec): min=9149, max=82532, avg=25062.39, stdev=14045.78 00:08:27.650 clat percentiles (usec): 00:08:27.650 | 1.00th=[ 9503], 5.00th=[11863], 10.00th=[12387], 20.00th=[13173], 00:08:27.650 | 30.00th=[16319], 40.00th=[18744], 50.00th=[20055], 60.00th=[22152], 00:08:27.650 | 70.00th=[26346], 80.00th=[33817], 90.00th=[49546], 95.00th=[58983], 00:08:27.650 | 99.00th=[66847], 99.50th=[66847], 99.90th=[66847], 99.95th=[69731], 00:08:27.650 | 99.99th=[82314] 00:08:27.650 
write: IOPS=2991, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1009msec); 0 zone resets 00:08:27.650 slat (usec): min=3, max=13649, avg=163.56, stdev=960.16 00:08:27.650 clat (usec): min=5083, max=85970, avg=21194.49, stdev=15937.23 00:08:27.650 lat (usec): min=6717, max=85979, avg=21358.05, stdev=16036.43 00:08:27.650 clat percentiles (usec): 00:08:27.650 | 1.00th=[ 7701], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9896], 00:08:27.650 | 30.00th=[12256], 40.00th=[13698], 50.00th=[15533], 60.00th=[15926], 00:08:27.650 | 70.00th=[22414], 80.00th=[30802], 90.00th=[39584], 95.00th=[58983], 00:08:27.650 | 99.00th=[82314], 99.50th=[85459], 99.90th=[85459], 99.95th=[85459], 00:08:27.650 | 99.99th=[85459] 00:08:27.650 bw ( KiB/s): min= 7568, max=15552, per=21.82%, avg=11560.00, stdev=5645.54, samples=2 00:08:27.650 iops : min= 1892, max= 3888, avg=2890.00, stdev=1411.39, samples=2 00:08:27.650 lat (msec) : 10=13.43%, 20=45.54%, 50=33.63%, 100=7.40% 00:08:27.650 cpu : usr=3.37%, sys=4.56%, ctx=210, majf=0, minf=1 00:08:27.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:08:27.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:27.650 issued rwts: total=2560,3018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:27.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:27.650 job1: (groupid=0, jobs=1): err= 0: pid=2528179: Tue Nov 19 11:10:22 2024 00:08:27.650 read: IOPS=3568, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1002msec) 00:08:27.650 slat (usec): min=2, max=11349, avg=100.40, stdev=607.81 00:08:27.650 clat (usec): min=404, max=49331, avg=13300.11, stdev=8591.82 00:08:27.650 lat (usec): min=2407, max=49358, avg=13400.51, stdev=8630.95 00:08:27.650 clat percentiles (usec): 00:08:27.650 | 1.00th=[ 2737], 5.00th=[ 5932], 10.00th=[ 8094], 20.00th=[ 8979], 00:08:27.650 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10814], 60.00th=[11469], 
00:08:27.650 | 70.00th=[12911], 80.00th=[13566], 90.00th=[19530], 95.00th=[40109], 00:08:27.650 | 99.00th=[44827], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:08:27.650 | 99.99th=[49546] 00:08:27.650 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:08:27.650 slat (usec): min=3, max=38203, avg=165.75, stdev=1074.04 00:08:27.650 clat (usec): min=261, max=75560, avg=21394.94, stdev=19283.23 00:08:27.650 lat (usec): min=301, max=75585, avg=21560.69, stdev=19411.79 00:08:27.650 clat percentiles (usec): 00:08:27.650 | 1.00th=[ 1123], 5.00th=[ 3884], 10.00th=[ 8029], 20.00th=[ 8717], 00:08:27.650 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10552], 60.00th=[11600], 00:08:27.650 | 70.00th=[26346], 80.00th=[38536], 90.00th=[53740], 95.00th=[66323], 00:08:27.650 | 99.00th=[73925], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:08:27.650 | 99.99th=[76022] 00:08:27.650 bw ( KiB/s): min=13552, max=15120, per=27.05%, avg=14336.00, stdev=1108.74, samples=2 00:08:27.650 iops : min= 3388, max= 3780, avg=3584.00, stdev=277.19, samples=2 00:08:27.650 lat (usec) : 500=0.04%, 750=0.15%, 1000=0.21% 00:08:27.650 lat (msec) : 2=0.39%, 4=2.65%, 10=32.43%, 20=43.14%, 50=14.89% 00:08:27.650 lat (msec) : 100=6.09% 00:08:27.650 cpu : usr=4.00%, sys=6.79%, ctx=333, majf=0, minf=1 00:08:27.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:08:27.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:27.650 issued rwts: total=3576,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:27.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:27.651 job2: (groupid=0, jobs=1): err= 0: pid=2528181: Tue Nov 19 11:10:22 2024 00:08:27.651 read: IOPS=4037, BW=15.8MiB/s (16.5MB/s)(15.9MiB/1007msec) 00:08:27.651 slat (usec): min=2, max=28861, avg=122.28, stdev=949.12 00:08:27.651 clat (usec): min=6175, max=81228, 
avg=16325.69, stdev=12260.21 00:08:27.651 lat (usec): min=6198, max=84995, avg=16447.97, stdev=12348.77 00:08:27.651 clat percentiles (usec): 00:08:27.651 | 1.00th=[ 6521], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[ 9896], 00:08:27.651 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10814], 60.00th=[11338], 00:08:27.651 | 70.00th=[12649], 80.00th=[21103], 90.00th=[36963], 95.00th=[43254], 00:08:27.651 | 99.00th=[58983], 99.50th=[70779], 99.90th=[76022], 99.95th=[76022], 00:08:27.651 | 99.99th=[81265] 00:08:27.651 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:08:27.651 slat (usec): min=4, max=16215, avg=113.57, stdev=671.16 00:08:27.651 clat (usec): min=5627, max=74877, avg=14900.66, stdev=11855.35 00:08:27.651 lat (usec): min=5638, max=76661, avg=15014.23, stdev=11935.64 00:08:27.651 clat percentiles (usec): 00:08:27.651 | 1.00th=[ 5866], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10159], 00:08:27.651 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[11076], 00:08:27.651 | 70.00th=[11338], 80.00th=[11863], 90.00th=[31065], 95.00th=[42206], 00:08:27.651 | 99.00th=[70779], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:08:27.651 | 99.99th=[74974] 00:08:27.651 bw ( KiB/s): min=16352, max=16416, per=30.92%, avg=16384.00, stdev=45.25, samples=2 00:08:27.651 iops : min= 4088, max= 4104, avg=4096.00, stdev=11.31, samples=2 00:08:27.651 lat (msec) : 10=18.86%, 20=63.26%, 50=14.48%, 100=3.41% 00:08:27.651 cpu : usr=4.67%, sys=6.76%, ctx=350, majf=0, minf=1 00:08:27.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:27.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:27.651 issued rwts: total=4066,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:27.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:27.651 job3: (groupid=0, jobs=1): err= 0: pid=2528182: Tue Nov 19 
11:10:22 2024 00:08:27.651 read: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec) 00:08:27.651 slat (usec): min=2, max=15872, avg=146.74, stdev=972.60 00:08:27.651 clat (usec): min=5666, max=51287, avg=16973.19, stdev=8251.31 00:08:27.651 lat (usec): min=5674, max=51294, avg=17119.93, stdev=8333.88 00:08:27.651 clat percentiles (usec): 00:08:27.651 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[11076], 20.00th=[11863], 00:08:27.651 | 30.00th=[12649], 40.00th=[13435], 50.00th=[14484], 60.00th=[15401], 00:08:27.651 | 70.00th=[15926], 80.00th=[19530], 90.00th=[26084], 95.00th=[38011], 00:08:27.651 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:08:27.651 | 99.99th=[51119] 00:08:27.651 write: IOPS=2707, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1015msec); 0 zone resets 00:08:27.651 slat (usec): min=3, max=40668, avg=217.25, stdev=1278.98 00:08:27.651 clat (usec): min=4324, max=78432, avg=28171.86, stdev=17987.78 00:08:27.651 lat (usec): min=4335, max=78446, avg=28389.11, stdev=18131.78 00:08:27.651 clat percentiles (usec): 00:08:27.651 | 1.00th=[ 5473], 5.00th=[10028], 10.00th=[10683], 20.00th=[11469], 00:08:27.651 | 30.00th=[12387], 40.00th=[16581], 50.00th=[26084], 60.00th=[33162], 00:08:27.651 | 70.00th=[35914], 80.00th=[39584], 90.00th=[53216], 95.00th=[69731], 00:08:27.651 | 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:08:27.651 | 99.99th=[78119] 00:08:27.651 bw ( KiB/s): min= 8680, max=12288, per=19.79%, avg=10484.00, stdev=2551.24, samples=2 00:08:27.651 iops : min= 2170, max= 3072, avg=2621.00, stdev=637.81, samples=2 00:08:27.651 lat (msec) : 10=4.67%, 20=58.76%, 50=30.35%, 100=6.22% 00:08:27.651 cpu : usr=2.96%, sys=5.33%, ctx=257, majf=0, minf=1 00:08:27.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:08:27.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:27.651 issued 
rwts: total=2560,2748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:27.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:27.651 00:08:27.651 Run status group 0 (all jobs): 00:08:27.651 READ: bw=49.1MiB/s (51.5MB/s), 9.85MiB/s-15.8MiB/s (10.3MB/s-16.5MB/s), io=49.9MiB (52.3MB), run=1002-1015msec 00:08:27.651 WRITE: bw=51.7MiB/s (54.3MB/s), 10.6MiB/s-15.9MiB/s (11.1MB/s-16.7MB/s), io=52.5MiB (55.1MB), run=1002-1015msec 00:08:27.651 00:08:27.651 Disk stats (read/write): 00:08:27.651 nvme0n1: ios=2610/2607, merge=0/0, ticks=20995/13950, in_queue=34945, util=91.08% 00:08:27.651 nvme0n2: ios=2576/2690, merge=0/0, ticks=28780/64515, in_queue=93295, util=98.07% 00:08:27.651 nvme0n3: ios=3584/3893, merge=0/0, ticks=25255/26286, in_queue=51541, util=88.15% 00:08:27.651 nvme0n4: ios=2108/2319, merge=0/0, ticks=31843/63735, in_queue=95578, util=100.00% 00:08:27.651 11:10:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:27.651 [global] 00:08:27.651 thread=1 00:08:27.651 invalidate=1 00:08:27.651 rw=randwrite 00:08:27.651 time_based=1 00:08:27.651 runtime=1 00:08:27.651 ioengine=libaio 00:08:27.651 direct=1 00:08:27.651 bs=4096 00:08:27.651 iodepth=128 00:08:27.651 norandommap=0 00:08:27.651 numjobs=1 00:08:27.651 00:08:27.651 verify_dump=1 00:08:27.651 verify_backlog=512 00:08:27.651 verify_state_save=0 00:08:27.651 do_verify=1 00:08:27.651 verify=crc32c-intel 00:08:27.651 [job0] 00:08:27.651 filename=/dev/nvme0n1 00:08:27.651 [job1] 00:08:27.651 filename=/dev/nvme0n2 00:08:27.651 [job2] 00:08:27.651 filename=/dev/nvme0n3 00:08:27.651 [job3] 00:08:27.651 filename=/dev/nvme0n4 00:08:27.651 Could not set queue depth (nvme0n1) 00:08:27.651 Could not set queue depth (nvme0n2) 00:08:27.651 Could not set queue depth (nvme0n3) 00:08:27.651 Could not set queue depth (nvme0n4) 00:08:27.651 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:27.651 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:27.651 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:27.651 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:27.651 fio-3.35 00:08:27.651 Starting 4 threads 00:08:29.023 00:08:29.023 job0: (groupid=0, jobs=1): err= 0: pid=2528408: Tue Nov 19 11:10:24 2024 00:08:29.023 read: IOPS=5060, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1002msec) 00:08:29.023 slat (usec): min=2, max=5790, avg=91.60, stdev=489.03 00:08:29.023 clat (usec): min=651, max=25680, avg=11613.49, stdev=2524.96 00:08:29.023 lat (usec): min=3665, max=25685, avg=11705.09, stdev=2556.75 00:08:29.023 clat percentiles (usec): 00:08:29.023 | 1.00th=[ 7111], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10552], 00:08:29.023 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:08:29.023 | 70.00th=[11600], 80.00th=[11863], 90.00th=[13698], 95.00th=[17433], 00:08:29.023 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22938], 99.95th=[23200], 00:08:29.023 | 99.99th=[25560] 00:08:29.023 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:08:29.023 slat (usec): min=3, max=28317, avg=97.53, stdev=650.12 00:08:29.023 clat (usec): min=7391, max=50354, avg=13154.21, stdev=5707.12 00:08:29.023 lat (usec): min=7398, max=50374, avg=13251.74, stdev=5762.07 00:08:29.023 clat percentiles (usec): 00:08:29.023 | 1.00th=[ 7832], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[10814], 00:08:29.023 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:08:29.023 | 70.00th=[11600], 80.00th=[12387], 90.00th=[20317], 95.00th=[28967], 00:08:29.023 | 99.00th=[37487], 99.50th=[37487], 99.90th=[38011], 99.95th=[41157], 00:08:29.023 | 99.99th=[50594] 
00:08:29.023 bw ( KiB/s): min=18576, max=22384, per=30.69%, avg=20480.00, stdev=2692.66, samples=2 00:08:29.023 iops : min= 4644, max= 5596, avg=5120.00, stdev=673.17, samples=2 00:08:29.023 lat (usec) : 750=0.01% 00:08:29.023 lat (msec) : 4=0.21%, 10=8.61%, 20=84.48%, 50=6.69%, 100=0.01% 00:08:29.023 cpu : usr=4.80%, sys=6.79%, ctx=594, majf=0, minf=1 00:08:29.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:29.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:29.023 issued rwts: total=5071,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:29.023 job1: (groupid=0, jobs=1): err= 0: pid=2528409: Tue Nov 19 11:10:24 2024 00:08:29.023 read: IOPS=2921, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1005msec) 00:08:29.023 slat (usec): min=2, max=14969, avg=163.53, stdev=1016.68 00:08:29.023 clat (usec): min=976, max=62884, avg=21550.13, stdev=14510.63 00:08:29.023 lat (usec): min=7092, max=62899, avg=21713.66, stdev=14601.42 00:08:29.023 clat percentiles (usec): 00:08:29.023 | 1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[ 8848], 20.00th=[10421], 00:08:29.023 | 30.00th=[11469], 40.00th=[12518], 50.00th=[16712], 60.00th=[19268], 00:08:29.023 | 70.00th=[23725], 80.00th=[28443], 90.00th=[50594], 95.00th=[54264], 00:08:29.023 | 99.00th=[62129], 99.50th=[62129], 99.90th=[62653], 99.95th=[62653], 00:08:29.023 | 99.99th=[62653] 00:08:29.023 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:08:29.023 slat (usec): min=3, max=8536, avg=149.87, stdev=719.95 00:08:29.023 clat (usec): min=1541, max=62956, avg=20885.96, stdev=15283.95 00:08:29.023 lat (usec): min=1554, max=62971, avg=21035.83, stdev=15395.99 00:08:29.023 clat percentiles (usec): 00:08:29.023 | 1.00th=[ 3195], 5.00th=[ 7308], 10.00th=[ 9372], 20.00th=[10028], 00:08:29.023 | 30.00th=[10290], 
40.00th=[10683], 50.00th=[12125], 60.00th=[22938], 00:08:29.023 | 70.00th=[24511], 80.00th=[30802], 90.00th=[46400], 95.00th=[58983], 00:08:29.023 | 99.00th=[62129], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:08:29.023 | 99.99th=[63177] 00:08:29.023 bw ( KiB/s): min= 7488, max=17088, per=18.41%, avg=12288.00, stdev=6788.23, samples=2 00:08:29.023 iops : min= 1872, max= 4272, avg=3072.00, stdev=1697.06, samples=2 00:08:29.023 lat (usec) : 1000=0.02% 00:08:29.023 lat (msec) : 2=0.03%, 4=1.61%, 10=13.62%, 20=44.42%, 50=30.59% 00:08:29.023 lat (msec) : 100=9.70% 00:08:29.023 cpu : usr=2.59%, sys=5.08%, ctx=352, majf=0, minf=1 00:08:29.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:08:29.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:29.023 issued rwts: total=2936,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:29.023 job2: (groupid=0, jobs=1): err= 0: pid=2528410: Tue Nov 19 11:10:24 2024 00:08:29.023 read: IOPS=3245, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1005msec) 00:08:29.023 slat (usec): min=2, max=10817, avg=147.61, stdev=837.07 00:08:29.023 clat (usec): min=1529, max=44858, avg=18758.21, stdev=6808.92 00:08:29.023 lat (usec): min=5059, max=44870, avg=18905.82, stdev=6850.34 00:08:29.023 clat percentiles (usec): 00:08:29.023 | 1.00th=[ 8586], 5.00th=[12387], 10.00th=[14222], 20.00th=[14877], 00:08:29.023 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15795], 60.00th=[16909], 00:08:29.023 | 70.00th=[18482], 80.00th=[22414], 90.00th=[30278], 95.00th=[33817], 00:08:29.023 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:08:29.023 | 99.99th=[44827] 00:08:29.023 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:08:29.023 slat (usec): min=3, max=25789, avg=136.29, stdev=760.51 00:08:29.023 clat 
(usec): min=9014, max=43623, avg=18265.05, stdev=6089.82 00:08:29.023 lat (usec): min=9022, max=43631, avg=18401.34, stdev=6125.09 00:08:29.023 clat percentiles (usec): 00:08:29.023 | 1.00th=[10683], 5.00th=[12387], 10.00th=[13042], 20.00th=[13829], 00:08:29.023 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15401], 60.00th=[17171], 00:08:29.023 | 70.00th=[22414], 80.00th=[23987], 90.00th=[24511], 95.00th=[26608], 00:08:29.023 | 99.00th=[39584], 99.50th=[39584], 99.90th=[43254], 99.95th=[43779], 00:08:29.023 | 99.99th=[43779] 00:08:29.023 bw ( KiB/s): min=13112, max=15560, per=21.48%, avg=14336.00, stdev=1731.00, samples=2 00:08:29.023 iops : min= 3278, max= 3890, avg=3584.00, stdev=432.75, samples=2 00:08:29.023 lat (msec) : 2=0.01%, 10=1.07%, 20=70.16%, 50=28.76% 00:08:29.023 cpu : usr=3.29%, sys=5.98%, ctx=308, majf=0, minf=1 00:08:29.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:08:29.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:29.023 issued rwts: total=3262,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:29.023 job3: (groupid=0, jobs=1): err= 0: pid=2528411: Tue Nov 19 11:10:24 2024 00:08:29.023 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:08:29.023 slat (usec): min=3, max=12562, avg=101.71, stdev=591.05 00:08:29.023 clat (usec): min=7637, max=20148, avg=12827.41, stdev=1608.00 00:08:29.023 lat (usec): min=7651, max=26160, avg=12929.11, stdev=1670.81 00:08:29.023 clat percentiles (usec): 00:08:29.023 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11994], 00:08:29.023 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:08:29.023 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14484], 95.00th=[15926], 00:08:29.023 | 99.00th=[17695], 99.50th=[17957], 99.90th=[19530], 99.95th=[20055], 
00:08:29.023 | 99.99th=[20055] 00:08:29.023 write: IOPS=4982, BW=19.5MiB/s (20.4MB/s)(19.5MiB/1002msec); 0 zone resets 00:08:29.023 slat (usec): min=4, max=9037, avg=95.14, stdev=518.59 00:08:29.023 clat (usec): min=851, max=28645, avg=13523.63, stdev=2697.70 00:08:29.023 lat (usec): min=5433, max=28694, avg=13618.76, stdev=2733.83 00:08:29.023 clat percentiles (usec): 00:08:29.023 | 1.00th=[ 6587], 5.00th=[ 9110], 10.00th=[11600], 20.00th=[12518], 00:08:29.023 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:08:29.023 | 70.00th=[13304], 80.00th=[14484], 90.00th=[18482], 95.00th=[19530], 00:08:29.023 | 99.00th=[20055], 99.50th=[20055], 99.90th=[27919], 99.95th=[28181], 00:08:29.023 | 99.99th=[28705] 00:08:29.023 bw ( KiB/s): min=18960, max=19960, per=29.16%, avg=19460.00, stdev=707.11, samples=2 00:08:29.023 iops : min= 4740, max= 4990, avg=4865.00, stdev=176.78, samples=2 00:08:29.023 lat (usec) : 1000=0.01% 00:08:29.023 lat (msec) : 10=5.85%, 20=93.02%, 50=1.11% 00:08:29.023 cpu : usr=5.49%, sys=10.59%, ctx=509, majf=0, minf=1 00:08:29.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:29.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:29.023 issued rwts: total=4608,4992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:29.023 00:08:29.023 Run status group 0 (all jobs): 00:08:29.023 READ: bw=61.7MiB/s (64.7MB/s), 11.4MiB/s-19.8MiB/s (12.0MB/s-20.7MB/s), io=62.0MiB (65.0MB), run=1002-1005msec 00:08:29.023 WRITE: bw=65.2MiB/s (68.3MB/s), 11.9MiB/s-20.0MiB/s (12.5MB/s-20.9MB/s), io=65.5MiB (68.7MB), run=1002-1005msec 00:08:29.023 00:08:29.023 Disk stats (read/write): 00:08:29.023 nvme0n1: ios=4146/4367, merge=0/0, ticks=16315/18565, in_queue=34880, util=87.07% 00:08:29.023 nvme0n2: ios=2573/3072, merge=0/0, ticks=23679/27465, 
in_queue=51144, util=86.90% 00:08:29.023 nvme0n3: ios=2617/3055, merge=0/0, ticks=19175/19923, in_queue=39098, util=98.23% 00:08:29.023 nvme0n4: ios=4020/4096, merge=0/0, ticks=25380/28333, in_queue=53713, util=89.61% 00:08:29.023 11:10:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:29.023 11:10:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2528547 00:08:29.023 11:10:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:29.023 11:10:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:29.023 [global] 00:08:29.023 thread=1 00:08:29.023 invalidate=1 00:08:29.023 rw=read 00:08:29.023 time_based=1 00:08:29.023 runtime=10 00:08:29.023 ioengine=libaio 00:08:29.023 direct=1 00:08:29.023 bs=4096 00:08:29.023 iodepth=1 00:08:29.023 norandommap=1 00:08:29.023 numjobs=1 00:08:29.023 00:08:29.023 [job0] 00:08:29.023 filename=/dev/nvme0n1 00:08:29.023 [job1] 00:08:29.023 filename=/dev/nvme0n2 00:08:29.023 [job2] 00:08:29.023 filename=/dev/nvme0n3 00:08:29.023 [job3] 00:08:29.023 filename=/dev/nvme0n4 00:08:29.023 Could not set queue depth (nvme0n1) 00:08:29.023 Could not set queue depth (nvme0n2) 00:08:29.023 Could not set queue depth (nvme0n3) 00:08:29.023 Could not set queue depth (nvme0n4) 00:08:29.280 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.280 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.280 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.280 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.280 fio-3.35 00:08:29.280 Starting 4 threads 00:08:32.558 11:10:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:32.558 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:32.558 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=315392, buflen=4096 00:08:32.558 fio: pid=2528644, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:32.558 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:32.558 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:32.558 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=356352, buflen=4096 00:08:32.558 fio: pid=2528643, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:32.820 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:32.820 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:32.820 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=442368, buflen=4096 00:08:32.820 fio: pid=2528641, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:33.118 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:33.118 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 
00:08:33.118 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=454656, buflen=4096 00:08:33.118 fio: pid=2528642, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:33.118 00:08:33.118 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2528641: Tue Nov 19 11:10:28 2024 00:08:33.118 read: IOPS=31, BW=124KiB/s (127kB/s)(432KiB/3475msec) 00:08:33.118 slat (usec): min=8, max=26872, avg=302.38, stdev=2595.61 00:08:33.118 clat (usec): min=206, max=44036, avg=31649.53, stdev=17198.78 00:08:33.118 lat (usec): min=219, max=68004, avg=31954.55, stdev=17545.47 00:08:33.118 clat percentiles (usec): 00:08:33.118 | 1.00th=[ 237], 5.00th=[ 412], 10.00th=[ 445], 20.00th=[ 570], 00:08:33.118 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:33.118 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:33.118 | 99.00th=[42206], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:08:33.118 | 99.99th=[43779] 00:08:33.118 bw ( KiB/s): min= 96, max= 168, per=31.66%, avg=129.33, stdev=32.17, samples=6 00:08:33.118 iops : min= 24, max= 42, avg=32.33, stdev= 8.04, samples=6 00:08:33.118 lat (usec) : 250=1.83%, 500=10.09%, 750=11.01% 00:08:33.118 lat (msec) : 50=76.15% 00:08:33.118 cpu : usr=0.14%, sys=0.00%, ctx=111, majf=0, minf=1 00:08:33.118 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:33.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.118 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.118 issued rwts: total=109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.118 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:33.118 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2528642: Tue Nov 19 11:10:28 2024 00:08:33.118 read: IOPS=29, BW=118KiB/s 
(121kB/s)(444KiB/3760msec) 00:08:33.118 slat (usec): min=12, max=9943, avg=107.85, stdev=937.75 00:08:33.118 clat (usec): min=342, max=41559, avg=33666.66, stdev=15651.03 00:08:33.118 lat (usec): min=359, max=50969, avg=33775.35, stdev=15719.02 00:08:33.118 clat percentiles (usec): 00:08:33.118 | 1.00th=[ 375], 5.00th=[ 412], 10.00th=[ 433], 20.00th=[40633], 00:08:33.118 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:33.118 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:33.118 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:08:33.118 | 99.99th=[41681] 00:08:33.119 bw ( KiB/s): min= 96, max= 176, per=29.21%, avg=119.29, stdev=30.43, samples=7 00:08:33.119 iops : min= 24, max= 44, avg=29.71, stdev= 7.70, samples=7 00:08:33.119 lat (usec) : 500=16.07%, 750=1.79% 00:08:33.119 lat (msec) : 50=81.25% 00:08:33.119 cpu : usr=0.13%, sys=0.00%, ctx=115, majf=0, minf=2 00:08:33.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:33.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.119 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.119 issued rwts: total=112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:33.119 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2528643: Tue Nov 19 11:10:28 2024 00:08:33.119 read: IOPS=27, BW=109KiB/s (112kB/s)(348KiB/3190msec) 00:08:33.119 slat (nsec): min=7817, max=96958, avg=18958.98, stdev=11147.92 00:08:33.119 clat (usec): min=225, max=42260, avg=36377.15, stdev=13036.31 00:08:33.119 lat (usec): min=240, max=42277, avg=36395.97, stdev=13035.66 00:08:33.119 clat percentiles (usec): 00:08:33.119 | 1.00th=[ 227], 5.00th=[ 469], 10.00th=[ 562], 20.00th=[40633], 00:08:33.119 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:08:33.119 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:08:33.119 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:33.119 | 99.99th=[42206] 00:08:33.119 bw ( KiB/s): min= 104, max= 120, per=26.75%, avg=109.33, stdev= 6.53, samples=6 00:08:33.119 iops : min= 26, max= 30, avg=27.33, stdev= 1.63, samples=6 00:08:33.119 lat (usec) : 250=4.55%, 500=1.14%, 750=5.68% 00:08:33.119 lat (msec) : 50=87.50% 00:08:33.119 cpu : usr=0.06%, sys=0.00%, ctx=89, majf=0, minf=2 00:08:33.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:33.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.119 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.119 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:33.119 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2528644: Tue Nov 19 11:10:28 2024 00:08:33.119 read: IOPS=26, BW=106KiB/s (109kB/s)(308KiB/2905msec) 00:08:33.119 slat (nsec): min=13846, max=52998, avg=20882.32, stdev=9836.67 00:08:33.119 clat (usec): min=357, max=41996, avg=37359.60, stdev=11750.37 00:08:33.119 lat (usec): min=374, max=42014, avg=37380.57, stdev=11750.21 00:08:33.119 clat percentiles (usec): 00:08:33.119 | 1.00th=[ 359], 5.00th=[ 453], 10.00th=[40633], 20.00th=[41157], 00:08:33.119 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:33.119 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:08:33.119 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:33.119 | 99.99th=[42206] 00:08:33.119 bw ( KiB/s): min= 96, max= 120, per=25.77%, avg=105.60, stdev=10.43, samples=5 00:08:33.119 iops : min= 24, max= 30, avg=26.40, stdev= 2.61, samples=5 00:08:33.119 lat (usec) : 500=7.69%, 750=1.28% 00:08:33.119 lat (msec) : 
50=89.74% 00:08:33.119 cpu : usr=0.07%, sys=0.00%, ctx=78, majf=0, minf=2 00:08:33.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:33.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.119 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.119 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:33.119 00:08:33.119 Run status group 0 (all jobs): 00:08:33.119 READ: bw=407KiB/s (417kB/s), 106KiB/s-124KiB/s (109kB/s-127kB/s), io=1532KiB (1569kB), run=2905-3760msec 00:08:33.119 00:08:33.119 Disk stats (read/write): 00:08:33.119 nvme0n1: ios=105/0, merge=0/0, ticks=3298/0, in_queue=3298, util=95.34% 00:08:33.119 nvme0n2: ios=107/0, merge=0/0, ticks=3576/0, in_queue=3576, util=96.41% 00:08:33.119 nvme0n3: ios=84/0, merge=0/0, ticks=3085/0, in_queue=3085, util=96.82% 00:08:33.119 nvme0n4: ios=76/0, merge=0/0, ticks=2838/0, in_queue=2838, util=96.72% 00:08:33.429 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:33.429 11:10:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:33.686 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:33.686 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:33.943 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:33.943 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:34.200 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:34.200 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:34.458 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:34.458 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2528547 00:08:34.458 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:34.458 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.715 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.715 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:34.715 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:34.715 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.715 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:34.715 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.715 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:34.715 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:34.715 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:34.715 nvmf hotplug test: fio failed as expected 00:08:34.715 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.973 rmmod nvme_tcp 00:08:34.973 rmmod nvme_fabrics 00:08:34.973 rmmod nvme_keyring 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:08:34.973 11:10:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2526514 ']' 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2526514 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2526514 ']' 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2526514 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2526514 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2526514' 00:08:34.973 killing process with pid 2526514 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2526514 00:08:34.973 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2526514 00:08:35.231 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.231 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.231 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.231 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:08:35.231 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:08:35.231 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.231 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.231 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.231 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.231 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.231 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.231 11:10:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.770 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:37.770 00:08:37.770 real 0m24.593s 00:08:37.770 user 1m25.747s 00:08:37.770 sys 0m6.372s 00:08:37.770 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.770 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:37.770 ************************************ 00:08:37.770 END TEST nvmf_fio_target 00:08:37.771 ************************************ 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.771 ************************************ 
00:08:37.771 START TEST nvmf_bdevio 00:08:37.771 ************************************ 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:37.771 * Looking for test storage... 00:08:37.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.771 11:10:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.771 --rc genhtml_branch_coverage=1 00:08:37.771 --rc genhtml_function_coverage=1 00:08:37.771 --rc genhtml_legend=1 00:08:37.771 --rc geninfo_all_blocks=1 00:08:37.771 --rc geninfo_unexecuted_blocks=1 00:08:37.771 00:08:37.771 ' 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.771 --rc genhtml_branch_coverage=1 00:08:37.771 --rc genhtml_function_coverage=1 00:08:37.771 --rc genhtml_legend=1 00:08:37.771 --rc geninfo_all_blocks=1 00:08:37.771 --rc geninfo_unexecuted_blocks=1 00:08:37.771 00:08:37.771 ' 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.771 --rc genhtml_branch_coverage=1 00:08:37.771 --rc genhtml_function_coverage=1 00:08:37.771 --rc genhtml_legend=1 00:08:37.771 --rc geninfo_all_blocks=1 00:08:37.771 --rc geninfo_unexecuted_blocks=1 00:08:37.771 00:08:37.771 ' 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.771 --rc genhtml_branch_coverage=1 00:08:37.771 --rc genhtml_function_coverage=1 00:08:37.771 --rc genhtml_legend=1 00:08:37.771 --rc geninfo_all_blocks=1 00:08:37.771 --rc geninfo_unexecuted_blocks=1 00:08:37.771 00:08:37.771 ' 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.771 11:10:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.771 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.772 11:10:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.305 11:10:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.305 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:40.306 11:10:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:08:40.306 Found 0000:82:00.0 (0x8086 - 0x159b) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:08:40.306 Found 0000:82:00.1 (0x8086 - 0x159b) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:40.306 
11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:08:40.306 Found net devices under 0000:82:00.0: cvl_0_0 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:08:40.306 Found net devices under 0000:82:00.1: cvl_0_1 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:40.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:40.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:08:40.306 00:08:40.306 --- 10.0.0.2 ping statistics --- 00:08:40.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.306 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:40.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:08:40.306 00:08:40.306 --- 10.0.0.1 ping statistics --- 00:08:40.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.306 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:40.306 11:10:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2531701 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2531701 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2531701 ']' 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.306 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.307 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.307 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:40.307 [2024-11-19 11:10:35.756541] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:08:40.307 [2024-11-19 11:10:35.756628] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.565 [2024-11-19 11:10:35.844616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.565 [2024-11-19 11:10:35.904888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.565 [2024-11-19 11:10:35.904969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.565 [2024-11-19 11:10:35.904991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.565 [2024-11-19 11:10:35.905009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.565 [2024-11-19 11:10:35.905023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:40.565 [2024-11-19 11:10:35.906709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:40.565 [2024-11-19 11:10:35.906774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:40.565 [2024-11-19 11:10:35.906840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:40.565 [2024-11-19 11:10:35.906843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.565 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.565 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:08:40.565 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.565 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.565 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:40.565 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.565 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.565 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.565 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:40.565 [2024-11-19 11:10:36.059190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.822 11:10:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:40.822 Malloc0 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:40.822 [2024-11-19 11:10:36.126887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.822 { 00:08:40.822 "params": { 00:08:40.822 "name": "Nvme$subsystem", 00:08:40.822 "trtype": "$TEST_TRANSPORT", 00:08:40.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.822 "adrfam": "ipv4", 00:08:40.822 "trsvcid": "$NVMF_PORT", 00:08:40.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.822 "hdgst": ${hdgst:-false}, 00:08:40.822 "ddgst": ${ddgst:-false} 00:08:40.822 }, 00:08:40.822 "method": "bdev_nvme_attach_controller" 00:08:40.822 } 00:08:40.822 EOF 00:08:40.822 )") 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:08:40.822 11:10:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.822 "params": { 00:08:40.822 "name": "Nvme1", 00:08:40.822 "trtype": "tcp", 00:08:40.822 "traddr": "10.0.0.2", 00:08:40.822 "adrfam": "ipv4", 00:08:40.822 "trsvcid": "4420", 00:08:40.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.822 "hdgst": false, 00:08:40.822 "ddgst": false 00:08:40.822 }, 00:08:40.822 "method": "bdev_nvme_attach_controller" 00:08:40.822 }' 00:08:40.822 [2024-11-19 11:10:36.176113] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:08:40.822 [2024-11-19 11:10:36.176184] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531840 ] 00:08:40.822 [2024-11-19 11:10:36.254557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.822 [2024-11-19 11:10:36.316579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.822 [2024-11-19 11:10:36.316631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.822 [2024-11-19 11:10:36.316635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.080 I/O targets: 00:08:41.080 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:41.080 00:08:41.080 00:08:41.080 CUnit - A unit testing framework for C - Version 2.1-3 00:08:41.080 http://cunit.sourceforge.net/ 00:08:41.080 00:08:41.080 00:08:41.080 Suite: bdevio tests on: Nvme1n1 00:08:41.080 Test: blockdev write read block ...passed 00:08:41.338 Test: blockdev write zeroes read block ...passed 00:08:41.338 Test: blockdev write zeroes read no split ...passed 00:08:41.338 Test: blockdev write zeroes read split 
...passed 00:08:41.338 Test: blockdev write zeroes read split partial ...passed 00:08:41.338 Test: blockdev reset ...[2024-11-19 11:10:36.615293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:08:41.338 [2024-11-19 11:10:36.615422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc74640 (9): Bad file descriptor 00:08:41.338 [2024-11-19 11:10:36.628535] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:08:41.338 passed 00:08:41.338 Test: blockdev write read 8 blocks ...passed 00:08:41.338 Test: blockdev write read size > 128k ...passed 00:08:41.338 Test: blockdev write read invalid size ...passed 00:08:41.338 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:41.338 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:41.338 Test: blockdev write read max offset ...passed 00:08:41.338 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:41.596 Test: blockdev writev readv 8 blocks ...passed 00:08:41.596 Test: blockdev writev readv 30 x 1block ...passed 00:08:41.596 Test: blockdev writev readv block ...passed 00:08:41.596 Test: blockdev writev readv size > 128k ...passed 00:08:41.596 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:41.596 Test: blockdev comparev and writev ...[2024-11-19 11:10:36.883486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:41.596 [2024-11-19 11:10:36.883525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:41.596 [2024-11-19 11:10:36.883550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:41.596 [2024-11-19 
11:10:36.883567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:41.596 [2024-11-19 11:10:36.884028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:41.596 [2024-11-19 11:10:36.884054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:41.596 [2024-11-19 11:10:36.884077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:41.596 [2024-11-19 11:10:36.884094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:41.596 [2024-11-19 11:10:36.884532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:41.596 [2024-11-19 11:10:36.884557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:41.596 [2024-11-19 11:10:36.884579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:41.596 [2024-11-19 11:10:36.884595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:41.596 [2024-11-19 11:10:36.885040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:41.596 [2024-11-19 11:10:36.885074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:41.596 [2024-11-19 11:10:36.885098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:08:41.596 [2024-11-19 11:10:36.885114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:41.596 passed 00:08:41.596 Test: blockdev nvme passthru rw ...passed 00:08:41.596 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:10:36.968839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:41.596 [2024-11-19 11:10:36.968867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:41.596 [2024-11-19 11:10:36.969158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:41.596 [2024-11-19 11:10:36.969182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:41.596 [2024-11-19 11:10:36.969479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:41.596 [2024-11-19 11:10:36.969504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:41.596 [2024-11-19 11:10:36.969793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:41.596 [2024-11-19 11:10:36.969817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:41.596 passed 00:08:41.596 Test: blockdev nvme admin passthru ...passed 00:08:41.596 Test: blockdev copy ...passed 00:08:41.596 00:08:41.596 Run Summary: Type Total Ran Passed Failed Inactive 00:08:41.596 suites 1 1 n/a 0 0 00:08:41.596 tests 23 23 23 0 0 00:08:41.596 asserts 152 152 152 0 n/a 00:08:41.596 00:08:41.596 Elapsed time = 1.059 seconds 
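Earlier in this run, `gen_nvmf_target_json` built the bdevio configuration from a here-document and printed the resolved JSON (`"name": "Nvme1"`, `"traddr": "10.0.0.2"`, `"trsvcid": "4420"`, method `bdev_nvme_attach_controller`). A minimal re-creation of that helper is sketched below; the field values are copied from the printed output, but the function body itself is an assumption, not the actual `nvmf/common.sh` source (which also loops over `"${@:-1}"` and joins subsystems with `jq`/`IFS=,`):

```shell
# Hypothetical single-subsystem version of gen_nvmf_target_json: emit the
# bdev_nvme_attach_controller config that the bdevio app reads from
# --json /dev/fd/62. Values mirror the trace; hdgst/ddgst default to false.
gen_nvmf_target_json() {
  local subsystem=${1:-1}
  cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
    "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_nvmf_target_json 1
```

Piping this into the bdevio binary (as `--json /dev/fd/62` does via process substitution in the test script) is what attaches `Nvme1n1` before the CUnit suite above runs.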
00:08:41.854 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.854 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.854 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:41.854 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.854 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:41.854 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:41.854 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.854 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.855 rmmod nvme_tcp 00:08:41.855 rmmod nvme_fabrics 00:08:41.855 rmmod nvme_keyring 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2531701 ']' 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2531701 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2531701 ']' 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2531701 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2531701 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2531701' 00:08:41.855 killing process with pid 2531701 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2531701 00:08:41.855 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2531701 00:08:42.113 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.113 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.113 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.113 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:08:42.113 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:08:42.113 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.113 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:42.113 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:08:42.113 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:42.113 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.113 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.113 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:44.654 00:08:44.654 real 0m6.890s 00:08:44.654 user 0m9.623s 00:08:44.654 sys 0m2.560s 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:44.654 ************************************ 00:08:44.654 END TEST nvmf_bdevio 00:08:44.654 ************************************ 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:44.654 00:08:44.654 real 4m2.322s 00:08:44.654 user 10m15.100s 00:08:44.654 sys 1m13.958s 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.654 ************************************ 00:08:44.654 END TEST nvmf_target_core 00:08:44.654 ************************************ 00:08:44.654 11:10:39 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:44.654 11:10:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.654 11:10:39 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.654 11:10:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:08:44.654 ************************************ 00:08:44.654 START TEST nvmf_target_extra 00:08:44.654 ************************************ 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:44.654 * Looking for test storage... 00:08:44.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.654 --rc genhtml_branch_coverage=1 00:08:44.654 --rc genhtml_function_coverage=1 00:08:44.654 --rc genhtml_legend=1 00:08:44.654 --rc geninfo_all_blocks=1 
00:08:44.654 --rc geninfo_unexecuted_blocks=1 00:08:44.654 00:08:44.654 ' 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.654 --rc genhtml_branch_coverage=1 00:08:44.654 --rc genhtml_function_coverage=1 00:08:44.654 --rc genhtml_legend=1 00:08:44.654 --rc geninfo_all_blocks=1 00:08:44.654 --rc geninfo_unexecuted_blocks=1 00:08:44.654 00:08:44.654 ' 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.654 --rc genhtml_branch_coverage=1 00:08:44.654 --rc genhtml_function_coverage=1 00:08:44.654 --rc genhtml_legend=1 00:08:44.654 --rc geninfo_all_blocks=1 00:08:44.654 --rc geninfo_unexecuted_blocks=1 00:08:44.654 00:08:44.654 ' 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.654 --rc genhtml_branch_coverage=1 00:08:44.654 --rc genhtml_function_coverage=1 00:08:44.654 --rc genhtml_legend=1 00:08:44.654 --rc geninfo_all_blocks=1 00:08:44.654 --rc geninfo_unexecuted_blocks=1 00:08:44.654 00:08:44.654 ' 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.654 11:10:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:44.655 ************************************ 00:08:44.655 START TEST nvmf_example 00:08:44.655 ************************************ 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:44.655 * Looking for test storage... 00:08:44.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:08:44.655 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.655 
11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.655 --rc genhtml_branch_coverage=1 00:08:44.655 --rc genhtml_function_coverage=1 00:08:44.655 --rc genhtml_legend=1 00:08:44.655 --rc geninfo_all_blocks=1 00:08:44.655 --rc geninfo_unexecuted_blocks=1 00:08:44.655 00:08:44.655 ' 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.655 --rc genhtml_branch_coverage=1 00:08:44.655 --rc genhtml_function_coverage=1 00:08:44.655 --rc genhtml_legend=1 00:08:44.655 --rc geninfo_all_blocks=1 00:08:44.655 --rc geninfo_unexecuted_blocks=1 00:08:44.655 00:08:44.655 ' 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.655 --rc genhtml_branch_coverage=1 00:08:44.655 --rc genhtml_function_coverage=1 00:08:44.655 --rc genhtml_legend=1 00:08:44.655 --rc geninfo_all_blocks=1 00:08:44.655 --rc geninfo_unexecuted_blocks=1 00:08:44.655 00:08:44.655 ' 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.655 --rc 
genhtml_branch_coverage=1 00:08:44.655 --rc genhtml_function_coverage=1 00:08:44.655 --rc genhtml_legend=1 00:08:44.655 --rc geninfo_all_blocks=1 00:08:44.655 --rc geninfo_unexecuted_blocks=1 00:08:44.655 00:08:44.655 ' 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.655 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:44.656 11:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.656 
11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:08:44.656 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:47.939 11:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:08:47.939 Found 0000:82:00.0 (0x8086 - 0x159b) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:08:47.939 Found 0000:82:00.1 (0x8086 - 0x159b) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.939 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:08:47.940 Found net devices under 0000:82:00.0: cvl_0_0 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.940 11:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:08:47.940 Found net devices under 0000:82:00.1: cvl_0_1 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.940 
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:08:47.940 00:08:47.940 --- 10.0.0.2 ping statistics --- 00:08:47.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.940 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:47.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:08:47.940 00:08:47.940 --- 10.0.0.1 ping statistics --- 00:08:47.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.940 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:47.940 11:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2534289 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2534289 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2534289 ']' 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:47.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.940 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.505 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.505 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:08:48.505 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:48.505 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:48.505 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.505 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.505 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.505 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:48.763 
11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:48.763 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:00.957 Initializing NVMe Controllers 00:09:00.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:00.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:00.957 Initialization complete. Launching workers. 00:09:00.957 ======================================================== 00:09:00.957 Latency(us) 00:09:00.957 Device Information : IOPS MiB/s Average min max 00:09:00.957 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14884.67 58.14 4299.32 732.43 16199.58 00:09:00.957 ======================================================== 00:09:00.957 Total : 14884.67 58.14 4299.32 732.43 16199.58 00:09:00.957 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:00.957 rmmod nvme_tcp 00:09:00.957 rmmod nvme_fabrics 00:09:00.957 rmmod nvme_keyring 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2534289 ']' 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2534289 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2534289 ']' 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2534289 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2534289 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2534289' 00:09:00.957 killing process with pid 2534289 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2534289 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2534289 00:09:00.957 nvmf threads initialize successfully 00:09:00.957 bdev subsystem init successfully 00:09:00.957 created a nvmf target service 00:09:00.957 create targets's poll groups done 00:09:00.957 all subsystems of target started 00:09:00.957 nvmf target is running 00:09:00.957 all subsystems of target stopped 00:09:00.957 destroy targets's poll groups done 00:09:00.957 destroyed the nvmf target service 00:09:00.957 bdev subsystem 
finish successfully 00:09:00.957 nvmf threads destroy successfully 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.957 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.216 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.216 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:01.216 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:01.216 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:01.216 00:09:01.216 real 0m16.788s 00:09:01.216 user 0m45.695s 00:09:01.216 sys 0m4.054s 00:09:01.216 
11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.216 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:01.216 ************************************ 00:09:01.216 END TEST nvmf_example 00:09:01.216 ************************************ 00:09:01.216 11:10:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:01.216 11:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.216 11:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.216 11:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:01.477 ************************************ 00:09:01.477 START TEST nvmf_filesystem 00:09:01.477 ************************************ 00:09:01.477 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:01.477 * Looking for test storage... 
00:09:01.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.477 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:01.477 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:01.478 
11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.478 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:01.478 --rc genhtml_branch_coverage=1 00:09:01.478 --rc genhtml_function_coverage=1 00:09:01.478 --rc genhtml_legend=1 00:09:01.478 --rc geninfo_all_blocks=1 00:09:01.478 --rc geninfo_unexecuted_blocks=1 00:09:01.478 00:09:01.478 ' 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.478 --rc genhtml_branch_coverage=1 00:09:01.478 --rc genhtml_function_coverage=1 00:09:01.478 --rc genhtml_legend=1 00:09:01.478 --rc geninfo_all_blocks=1 00:09:01.478 --rc geninfo_unexecuted_blocks=1 00:09:01.478 00:09:01.478 ' 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:01.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.478 --rc genhtml_branch_coverage=1 00:09:01.478 --rc genhtml_function_coverage=1 00:09:01.478 --rc genhtml_legend=1 00:09:01.478 --rc geninfo_all_blocks=1 00:09:01.478 --rc geninfo_unexecuted_blocks=1 00:09:01.478 00:09:01.478 ' 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.478 --rc genhtml_branch_coverage=1 00:09:01.478 --rc genhtml_function_coverage=1 00:09:01.478 --rc genhtml_legend=1 00:09:01.478 --rc geninfo_all_blocks=1 00:09:01.478 --rc geninfo_unexecuted_blocks=1 00:09:01.478 00:09:01.478 ' 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:01.478 11:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:01.478 11:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:01.478 11:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:01.478 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:01.479 11:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:01.479 11:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:01.479 
11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:01.479 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:01.479 #define SPDK_CONFIG_H 00:09:01.479 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:01.479 #define SPDK_CONFIG_APPS 1 00:09:01.479 #define SPDK_CONFIG_ARCH native 00:09:01.479 #undef SPDK_CONFIG_ASAN 00:09:01.479 #undef SPDK_CONFIG_AVAHI 00:09:01.479 #undef SPDK_CONFIG_CET 00:09:01.479 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:01.479 #define SPDK_CONFIG_COVERAGE 1 00:09:01.479 #define SPDK_CONFIG_CROSS_PREFIX 00:09:01.479 #undef SPDK_CONFIG_CRYPTO 00:09:01.479 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:01.479 #undef SPDK_CONFIG_CUSTOMOCF 00:09:01.479 #undef SPDK_CONFIG_DAOS 00:09:01.479 #define SPDK_CONFIG_DAOS_DIR 00:09:01.479 #define SPDK_CONFIG_DEBUG 1 00:09:01.479 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:01.479 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:01.479 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:01.479 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:01.479 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:01.479 #undef SPDK_CONFIG_DPDK_UADK 00:09:01.479 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:01.479 #define SPDK_CONFIG_EXAMPLES 1 00:09:01.479 #undef SPDK_CONFIG_FC 00:09:01.479 #define SPDK_CONFIG_FC_PATH 00:09:01.479 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:01.479 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:01.479 #define SPDK_CONFIG_FSDEV 1 00:09:01.479 #undef SPDK_CONFIG_FUSE 00:09:01.479 #undef SPDK_CONFIG_FUZZER 00:09:01.479 #define SPDK_CONFIG_FUZZER_LIB 00:09:01.479 #undef SPDK_CONFIG_GOLANG 00:09:01.479 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:01.479 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:01.479 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:01.479 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:01.479 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:01.479 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:01.479 #undef SPDK_CONFIG_HAVE_LZ4 00:09:01.479 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:01.479 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:01.479 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:01.479 #define SPDK_CONFIG_IDXD 1 00:09:01.479 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:01.479 #undef SPDK_CONFIG_IPSEC_MB 00:09:01.479 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:01.479 #define SPDK_CONFIG_ISAL 1 00:09:01.479 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:01.479 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:01.479 #define SPDK_CONFIG_LIBDIR 00:09:01.479 #undef SPDK_CONFIG_LTO 00:09:01.479 #define SPDK_CONFIG_MAX_LCORES 128 00:09:01.479 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:01.479 #define SPDK_CONFIG_NVME_CUSE 1 00:09:01.479 #undef SPDK_CONFIG_OCF 00:09:01.479 #define SPDK_CONFIG_OCF_PATH 00:09:01.479 #define SPDK_CONFIG_OPENSSL_PATH 00:09:01.479 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:01.479 #define SPDK_CONFIG_PGO_DIR 00:09:01.479 #undef SPDK_CONFIG_PGO_USE 00:09:01.479 #define SPDK_CONFIG_PREFIX /usr/local 00:09:01.479 #undef SPDK_CONFIG_RAID5F 00:09:01.479 #undef SPDK_CONFIG_RBD 00:09:01.479 #define SPDK_CONFIG_RDMA 1 00:09:01.479 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:01.479 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:01.479 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:01.479 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:01.479 #define SPDK_CONFIG_SHARED 1 00:09:01.479 #undef SPDK_CONFIG_SMA 00:09:01.479 #define SPDK_CONFIG_TESTS 1 00:09:01.479 #undef SPDK_CONFIG_TSAN 00:09:01.479 #define SPDK_CONFIG_UBLK 1 00:09:01.479 #define SPDK_CONFIG_UBSAN 1 00:09:01.479 #undef SPDK_CONFIG_UNIT_TESTS 00:09:01.479 #undef SPDK_CONFIG_URING 00:09:01.479 #define SPDK_CONFIG_URING_PATH 00:09:01.479 #undef SPDK_CONFIG_URING_ZNS 00:09:01.479 #undef SPDK_CONFIG_USDT 00:09:01.479 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:01.479 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:01.479 #define SPDK_CONFIG_VFIO_USER 1 00:09:01.479 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:01.479 #define SPDK_CONFIG_VHOST 1 00:09:01.479 #define SPDK_CONFIG_VIRTIO 1 00:09:01.479 #undef SPDK_CONFIG_VTUNE 00:09:01.479 #define SPDK_CONFIG_VTUNE_DIR 00:09:01.479 #define SPDK_CONFIG_WERROR 1 00:09:01.479 #define SPDK_CONFIG_WPDK_DIR 00:09:01.479 #undef SPDK_CONFIG_XNVME 00:09:01.479 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:01.480 11:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:01.480 
11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:01.480 11:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:01.480 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:01.481 
11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:01.481 11:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:01.481 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
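The `LD_LIBRARY_PATH` and `PYTHONPATH` values above contain the same directories repeated several times, which is what happens when the common setup scripts are sourced more than once and append unconditionally. A minimal sketch of deduplicating a colon-separated variable (the `dedup_colon_list` helper name is mine, not part of SPDK):

```shell
#!/usr/bin/env bash
# Remove duplicate entries from a colon-separated list, keeping the first
# occurrence of each. Hypothetical helper; the SPDK scripts in this log
# simply append on every source, producing the repetition seen above.
dedup_colon_list() {
    local IFS=':' entry out=
    declare -A seen
    for entry in $1; do
        # Skip empty entries (e.g. a leading ':') and anything already seen.
        [[ -z $entry || ${seen[$entry]} ]] && continue
        seen[$entry]=1
        out+=${out:+:}$entry
    done
    printf '%s\n' "$out"
}

dedup_colon_list ":/a/lib:/b/lib:/a/lib:/b/lib:/c/lib"
```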
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2536093 ]] 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2536093 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.TcX1R8 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.TcX1R8/tests/target /tmp/spdk.TcX1R8 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:01.482 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=56076926976 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988528128 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5911601152 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:01.741 
11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982897664 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375080960 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22626304 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993797120 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:09:01.741 11:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=466944 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:01.741 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:01.742 * Looking for test storage... 
00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=56076926976 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8126193664 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.742 11:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:01.742 11:10:56 
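The `set_test_storage` trace above selects a backing directory by parsing `df` output for the candidate's mount point and checking that the available space covers the requested size (2 GiB here). A rough standalone approximation of that check (the function body is my paraphrase of the logic visible in the trace, not SPDK's exact code):

```shell
#!/usr/bin/env bash
# Approximate the storage check seen in the trace: look up the free space
# on the filesystem backing a directory and compare it to a byte count.
has_test_storage() {
    local dir=$1 requested=$2 avail
    # df -P columns: Filesystem 1024-blocks Used Available Capacity Mounted-on
    avail=$(df -P "$dir" | awk 'NR == 2 {print $4}')
    # avail is in 1 KiB blocks; requested is in bytes.
    (( avail * 1024 >= requested ))
}

if has_test_storage /tmp $((2 * 1024 * 1024 * 1024)); then
    echo "enough space on /tmp"
fi
```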
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:01.742 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.742 --rc genhtml_branch_coverage=1 00:09:01.742 --rc genhtml_function_coverage=1 00:09:01.742 --rc genhtml_legend=1 00:09:01.742 --rc geninfo_all_blocks=1 00:09:01.742 --rc geninfo_unexecuted_blocks=1 00:09:01.742 00:09:01.742 ' 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.742 --rc genhtml_branch_coverage=1 00:09:01.742 --rc genhtml_function_coverage=1 00:09:01.742 --rc genhtml_legend=1 00:09:01.742 --rc geninfo_all_blocks=1 00:09:01.742 --rc geninfo_unexecuted_blocks=1 00:09:01.742 00:09:01.742 ' 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:01.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.742 --rc genhtml_branch_coverage=1 00:09:01.742 --rc genhtml_function_coverage=1 00:09:01.742 --rc genhtml_legend=1 00:09:01.742 --rc geninfo_all_blocks=1 00:09:01.742 --rc geninfo_unexecuted_blocks=1 00:09:01.742 00:09:01.742 ' 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.742 --rc genhtml_branch_coverage=1 00:09:01.742 --rc genhtml_function_coverage=1 00:09:01.742 --rc genhtml_legend=1 00:09:01.742 --rc geninfo_all_blocks=1 00:09:01.742 --rc geninfo_unexecuted_blocks=1 00:09:01.742 00:09:01.742 ' 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.742 11:10:57 
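The `cmp_versions` trace above (`lt 1.15 2`) splits each version string on `.`, `-`, and `:` into arrays and compares them component by component, which correctly ranks `1.15` below `2` where a plain string comparison would not. A simplified sketch in the same spirit (not the SPDK original; missing components are treated as 0):

```shell
#!/usr/bin/env bash
# Component-wise "less than" version comparison, in the spirit of the
# scripts/common.sh cmp_versions trace above. Numeric components only.
version_lt() {
    local IFS='.-:' i
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        ((a > b)) && return 1
        ((a < b)) && return 0
    done
    return 1   # versions are equal
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note that `1.15 < 2` holds here because the first components are compared as the integers 1 and 2, while a lexicographic comparison of the full strings would get `1.15` vs `1.9`-style cases wrong.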
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.742 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.743 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.279 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.279 11:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:04.280 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:04.280 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.280 11:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:04.280 Found net devices under 0000:82:00.0: cvl_0_0 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:04.280 Found net devices under 0000:82:00.1: cvl_0_1 00:09:04.280 11:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.280 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.538 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.538 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.538 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:04.538 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.538 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.538 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.538 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:04.538 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:04.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:04.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:09:04.538 00:09:04.538 --- 10.0.0.2 ping statistics --- 00:09:04.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.538 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:09:04.538 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:09:04.538 00:09:04.538 --- 10.0.0.1 ping statistics --- 00:09:04.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.539 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:04.539 11:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:04.539 ************************************ 00:09:04.539 START TEST nvmf_filesystem_no_in_capsule 00:09:04.539 ************************************ 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2538035 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2538035 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2538035 ']' 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.539 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.539 [2024-11-19 11:10:59.956931] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:09:04.539 [2024-11-19 11:10:59.956999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.798 [2024-11-19 11:11:00.067704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.798 [2024-11-19 11:11:00.146179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.798 [2024-11-19 11:11:00.146259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:04.798 [2024-11-19 11:11:00.146303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.798 [2024-11-19 11:11:00.146342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.798 [2024-11-19 11:11:00.146372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.798 [2024-11-19 11:11:00.148578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.798 [2024-11-19 11:11:00.148640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.798 [2024-11-19 11:11:00.148784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.798 [2024-11-19 11:11:00.148796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.056 [2024-11-19 11:11:00.373446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.056 Malloc1 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.056 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.315 [2024-11-19 11:11:00.564779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:05.315 11:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:05.315 { 00:09:05.315 "name": "Malloc1", 00:09:05.315 "aliases": [ 00:09:05.315 "3be3ddb8-92df-4d03-a8c5-72f97e331665" 00:09:05.315 ], 00:09:05.315 "product_name": "Malloc disk", 00:09:05.315 "block_size": 512, 00:09:05.315 "num_blocks": 1048576, 00:09:05.315 "uuid": "3be3ddb8-92df-4d03-a8c5-72f97e331665", 00:09:05.315 "assigned_rate_limits": { 00:09:05.315 "rw_ios_per_sec": 0, 00:09:05.315 "rw_mbytes_per_sec": 0, 00:09:05.315 "r_mbytes_per_sec": 0, 00:09:05.315 "w_mbytes_per_sec": 0 00:09:05.315 }, 00:09:05.315 "claimed": true, 00:09:05.315 "claim_type": "exclusive_write", 00:09:05.315 "zoned": false, 00:09:05.315 "supported_io_types": { 00:09:05.315 "read": true, 00:09:05.315 "write": true, 00:09:05.315 "unmap": true, 00:09:05.315 "flush": true, 00:09:05.315 "reset": true, 00:09:05.315 "nvme_admin": false, 00:09:05.315 "nvme_io": false, 00:09:05.315 "nvme_io_md": false, 00:09:05.315 "write_zeroes": true, 00:09:05.315 "zcopy": true, 00:09:05.315 "get_zone_info": false, 00:09:05.315 "zone_management": false, 00:09:05.315 "zone_append": false, 00:09:05.315 "compare": false, 00:09:05.315 "compare_and_write": 
false, 00:09:05.315 "abort": true, 00:09:05.315 "seek_hole": false, 00:09:05.315 "seek_data": false, 00:09:05.315 "copy": true, 00:09:05.315 "nvme_iov_md": false 00:09:05.315 }, 00:09:05.315 "memory_domains": [ 00:09:05.315 { 00:09:05.315 "dma_device_id": "system", 00:09:05.315 "dma_device_type": 1 00:09:05.315 }, 00:09:05.315 { 00:09:05.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.315 "dma_device_type": 2 00:09:05.315 } 00:09:05.315 ], 00:09:05.315 "driver_specific": {} 00:09:05.315 } 00:09:05.315 ]' 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:05.315 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.880 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:05.880 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:05.880 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.880 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:05.880 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:08.407 11:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:08.407 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:09.340 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:10.274 11:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.274 ************************************ 00:09:10.274 START TEST filesystem_ext4 00:09:10.274 ************************************ 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:10.274 11:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:10.274 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:10.274 mke2fs 1.47.0 (5-Feb-2023) 00:09:10.274 Discarding device blocks: 0/522240 done 00:09:10.274 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:10.274 Filesystem UUID: 62a57e32-9a04-403c-9d04-e8e1fa199ed3 00:09:10.274 Superblock backups stored on blocks: 00:09:10.274 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:10.274 00:09:10.274 Allocating group tables: 0/64 done 00:09:10.274 Writing inode tables: 0/64 done 00:09:10.533 Creating journal (8192 blocks): done 00:09:12.465 Writing superblocks and filesystem accounting information: 0/64 done 00:09:12.465 00:09:12.465 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:12.465 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:19.044 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:19.044 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:19.044 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:19.044 11:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:19.044 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:19.044 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:19.044 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2538035 00:09:19.044 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:19.044 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:19.044 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:19.045 00:09:19.045 real 0m7.906s 00:09:19.045 user 0m0.013s 00:09:19.045 sys 0m0.074s 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:19.045 ************************************ 00:09:19.045 END TEST filesystem_ext4 00:09:19.045 ************************************ 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:19.045 
11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.045 ************************************ 00:09:19.045 START TEST filesystem_btrfs 00:09:19.045 ************************************ 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:19.045 11:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:19.045 btrfs-progs v6.8.1 00:09:19.045 See https://btrfs.readthedocs.io for more information. 00:09:19.045 00:09:19.045 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:19.045 NOTE: several default settings have changed in version 5.15, please make sure 00:09:19.045 this does not affect your deployments: 00:09:19.045 - DUP for metadata (-m dup) 00:09:19.045 - enabled no-holes (-O no-holes) 00:09:19.045 - enabled free-space-tree (-R free-space-tree) 00:09:19.045 00:09:19.045 Label: (null) 00:09:19.045 UUID: 5e9ee9ae-0e4e-4386-8000-a240338bf3b4 00:09:19.045 Node size: 16384 00:09:19.045 Sector size: 4096 (CPU page size: 4096) 00:09:19.045 Filesystem size: 510.00MiB 00:09:19.045 Block group profiles: 00:09:19.045 Data: single 8.00MiB 00:09:19.045 Metadata: DUP 32.00MiB 00:09:19.045 System: DUP 8.00MiB 00:09:19.045 SSD detected: yes 00:09:19.045 Zoned device: no 00:09:19.045 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:19.045 Checksum: crc32c 00:09:19.045 Number of devices: 1 00:09:19.045 Devices: 00:09:19.045 ID SIZE PATH 00:09:19.045 1 510.00MiB /dev/nvme0n1p1 00:09:19.045 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:19.045 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:19.045 11:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2538035 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:19.045 00:09:19.045 real 0m0.490s 00:09:19.045 user 0m0.020s 00:09:19.045 sys 0m0.092s 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.045 
11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:19.045 ************************************ 00:09:19.045 END TEST filesystem_btrfs 00:09:19.045 ************************************ 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.045 ************************************ 00:09:19.045 START TEST filesystem_xfs 00:09:19.045 ************************************ 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:19.045 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:19.046 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:19.046 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:19.046 = sectsz=512 attr=2, projid32bit=1 00:09:19.046 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:19.046 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:19.046 data = bsize=4096 blocks=130560, imaxpct=25 00:09:19.046 = sunit=0 swidth=0 blks 00:09:19.046 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:19.046 log =internal log bsize=4096 blocks=16384, version=2 00:09:19.046 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:19.046 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:19.977 Discarding blocks...Done. 
00:09:19.977 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:19.977 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2538035 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:21.874 11:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:21.874 00:09:21.874 real 0m3.214s 00:09:21.874 user 0m0.028s 00:09:21.874 sys 0m0.048s 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:21.874 ************************************ 00:09:21.874 END TEST filesystem_xfs 00:09:21.874 ************************************ 00:09:21.874 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:22.132 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:22.132 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2538035 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2538035 ']' 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2538035 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2538035 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2538035' 00:09:22.390 killing process with pid 2538035 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2538035 00:09:22.390 11:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2538035 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:22.956 00:09:22.956 real 0m18.313s 00:09:22.956 user 1m10.917s 00:09:22.956 sys 0m2.237s 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.956 ************************************ 00:09:22.956 END TEST nvmf_filesystem_no_in_capsule 00:09:22.956 ************************************ 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.956 11:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:22.956 ************************************ 00:09:22.956 START TEST nvmf_filesystem_in_capsule 00:09:22.956 ************************************ 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2540467 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2540467 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2540467 ']' 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.956 11:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.956 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.956 [2024-11-19 11:11:18.329975] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:09:22.956 [2024-11-19 11:11:18.330062] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.956 [2024-11-19 11:11:18.421095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.214 [2024-11-19 11:11:18.482799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.214 [2024-11-19 11:11:18.482856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.214 [2024-11-19 11:11:18.482885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.214 [2024-11-19 11:11:18.482898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.214 [2024-11-19 11:11:18.482908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:23.214 [2024-11-19 11:11:18.484554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.214 [2024-11-19 11:11:18.484580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.214 [2024-11-19 11:11:18.484640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.214 [2024-11-19 11:11:18.484644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.214 [2024-11-19 11:11:18.641625] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.214 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.472 Malloc1 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.472 11:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.472 [2024-11-19 11:11:18.829798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.472 11:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:23.472 { 00:09:23.472 "name": "Malloc1", 00:09:23.472 "aliases": [ 00:09:23.472 "10d2c232-d18e-47d6-999a-26fc11b5aef1" 00:09:23.472 ], 00:09:23.472 "product_name": "Malloc disk", 00:09:23.472 "block_size": 512, 00:09:23.472 "num_blocks": 1048576, 00:09:23.472 "uuid": "10d2c232-d18e-47d6-999a-26fc11b5aef1", 00:09:23.472 "assigned_rate_limits": { 00:09:23.472 "rw_ios_per_sec": 0, 00:09:23.472 "rw_mbytes_per_sec": 0, 00:09:23.472 "r_mbytes_per_sec": 0, 00:09:23.472 "w_mbytes_per_sec": 0 00:09:23.472 }, 00:09:23.472 "claimed": true, 00:09:23.472 "claim_type": "exclusive_write", 00:09:23.472 "zoned": false, 00:09:23.472 "supported_io_types": { 00:09:23.472 "read": true, 00:09:23.472 "write": true, 00:09:23.472 "unmap": true, 00:09:23.472 "flush": true, 00:09:23.472 "reset": true, 00:09:23.472 "nvme_admin": false, 00:09:23.472 "nvme_io": false, 00:09:23.472 "nvme_io_md": false, 00:09:23.472 "write_zeroes": true, 00:09:23.472 "zcopy": true, 00:09:23.472 "get_zone_info": false, 00:09:23.472 "zone_management": false, 00:09:23.472 "zone_append": false, 00:09:23.472 "compare": false, 00:09:23.472 "compare_and_write": false, 00:09:23.472 "abort": true, 00:09:23.472 "seek_hole": false, 00:09:23.472 "seek_data": false, 00:09:23.472 "copy": true, 00:09:23.472 "nvme_iov_md": false 00:09:23.472 }, 00:09:23.472 "memory_domains": [ 00:09:23.472 { 00:09:23.472 "dma_device_id": "system", 00:09:23.472 "dma_device_type": 1 00:09:23.472 }, 00:09:23.472 { 00:09:23.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.472 "dma_device_type": 2 00:09:23.472 } 00:09:23.472 ], 00:09:23.472 
"driver_specific": {} 00:09:23.472 } 00:09:23.472 ]' 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:23.472 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:24.402 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:24.402 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:24.402 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.402 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:09:24.402 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:26.301 11:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:26.301 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:27.234 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:28.189 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:28.189 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:28.189 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:28.189 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.189 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.189 ************************************ 00:09:28.189 START TEST filesystem_in_capsule_ext4 00:09:28.189 ************************************ 00:09:28.189 11:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:28.189 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:28.189 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:28.189 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:28.189 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:28.189 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:28.189 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:28.189 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:28.190 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:28.190 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:28.190 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:28.190 mke2fs 1.47.0 (5-Feb-2023) 00:09:28.190 Discarding device blocks: 
0/522240 done 00:09:28.190 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:28.190 Filesystem UUID: 519d3046-77c0-4339-9ec8-e9a04ff22449 00:09:28.190 Superblock backups stored on blocks: 00:09:28.190 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:28.190 00:09:28.190 Allocating group tables: 0/64 done 00:09:28.190 Writing inode tables: 0/64 done 00:09:30.716 Creating journal (8192 blocks): done 00:09:30.716 Writing superblocks and filesystem accounting information: 0/64 done 00:09:30.716 00:09:30.716 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:30.716 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2540467 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:37.270 00:09:37.270 real 0m8.777s 00:09:37.270 user 0m0.027s 00:09:37.270 sys 0m0.060s 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:37.270 ************************************ 00:09:37.270 END TEST filesystem_in_capsule_ext4 00:09:37.270 ************************************ 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.270 ************************************ 00:09:37.270 START 
TEST filesystem_in_capsule_btrfs 00:09:37.270 ************************************ 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:37.270 btrfs-progs v6.8.1 00:09:37.270 See https://btrfs.readthedocs.io for more information. 00:09:37.270 00:09:37.270 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:37.270 NOTE: several default settings have changed in version 5.15, please make sure 00:09:37.270 this does not affect your deployments: 00:09:37.270 - DUP for metadata (-m dup) 00:09:37.270 - enabled no-holes (-O no-holes) 00:09:37.270 - enabled free-space-tree (-R free-space-tree) 00:09:37.270 00:09:37.270 Label: (null) 00:09:37.270 UUID: 09057dea-99a9-497e-ac5f-4f12571c5da9 00:09:37.270 Node size: 16384 00:09:37.270 Sector size: 4096 (CPU page size: 4096) 00:09:37.270 Filesystem size: 510.00MiB 00:09:37.270 Block group profiles: 00:09:37.270 Data: single 8.00MiB 00:09:37.270 Metadata: DUP 32.00MiB 00:09:37.270 System: DUP 8.00MiB 00:09:37.270 SSD detected: yes 00:09:37.270 Zoned device: no 00:09:37.270 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:37.270 Checksum: crc32c 00:09:37.270 Number of devices: 1 00:09:37.270 Devices: 00:09:37.270 ID SIZE PATH 00:09:37.270 1 510.00MiB /dev/nvme0n1p1 00:09:37.270 00:09:37.270 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:37.271 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2540467 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:37.529 00:09:37.529 real 0m0.739s 00:09:37.529 user 0m0.021s 00:09:37.529 sys 0m0.107s 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.529 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:37.529 ************************************ 00:09:37.529 END TEST filesystem_in_capsule_btrfs 00:09:37.529 ************************************ 00:09:37.529 11:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:37.529 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:37.529 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.529 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.786 ************************************ 00:09:37.786 START TEST filesystem_in_capsule_xfs 00:09:37.786 ************************************ 00:09:37.786 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:37.786 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:37.786 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:37.786 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:37.786 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:37.786 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:37.786 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:37.786 
11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:09:37.786 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:37.786 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:37.786 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:37.786 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:37.786 = sectsz=512 attr=2, projid32bit=1 00:09:37.786 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:37.786 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:37.786 data = bsize=4096 blocks=130560, imaxpct=25 00:09:37.786 = sunit=0 swidth=0 blks 00:09:37.786 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:37.786 log =internal log bsize=4096 blocks=16384, version=2 00:09:37.786 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:37.786 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:38.717 Discarding blocks...Done. 
00:09:38.718 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:38.718 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:40.616 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:40.616 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:40.616 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:40.616 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:40.616 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:40.616 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:40.616 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2540467 00:09:40.874 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:40.874 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:40.874 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:09:40.874 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:40.874 00:09:40.874 real 0m3.096s 00:09:40.874 user 0m0.017s 00:09:40.874 sys 0m0.060s 00:09:40.874 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.874 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:40.874 ************************************ 00:09:40.874 END TEST filesystem_in_capsule_xfs 00:09:40.874 ************************************ 00:09:40.874 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:40.874 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:40.874 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:41.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.132 11:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2540467 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2540467 ']' 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2540467 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.132 11:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2540467 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.132 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2540467' 00:09:41.132 killing process with pid 2540467 00:09:41.133 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2540467 00:09:41.133 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2540467 00:09:41.700 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:41.700 00:09:41.700 real 0m18.669s 00:09:41.700 user 1m12.218s 00:09:41.700 sys 0m2.332s 00:09:41.700 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.700 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.700 ************************************ 00:09:41.700 END TEST nvmf_filesystem_in_capsule 00:09:41.700 ************************************ 00:09:41.700 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:41.700 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:41.700 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:09:41.700 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.700 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:09:41.700 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.700 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:41.700 rmmod nvme_tcp 00:09:41.700 rmmod nvme_fabrics 00:09:41.700 rmmod nvme_keyring 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.700 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.607 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.607 00:09:43.607 real 0m42.358s 00:09:43.607 user 2m24.402s 00:09:43.607 sys 0m6.721s 00:09:43.607 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.607 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:43.607 ************************************ 00:09:43.607 END TEST nvmf_filesystem 00:09:43.607 ************************************ 00:09:43.607 11:11:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:43.607 11:11:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.607 11:11:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.607 11:11:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:43.867 ************************************ 00:09:43.867 START TEST nvmf_target_discovery 00:09:43.867 ************************************ 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:43.867 * Looking for test storage... 
00:09:43.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:09:43.867 
11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.867 --rc genhtml_branch_coverage=1 00:09:43.867 --rc genhtml_function_coverage=1 00:09:43.867 --rc genhtml_legend=1 00:09:43.867 --rc geninfo_all_blocks=1 00:09:43.867 --rc geninfo_unexecuted_blocks=1 00:09:43.867 00:09:43.867 ' 00:09:43.867 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.867 --rc genhtml_branch_coverage=1 00:09:43.867 --rc genhtml_function_coverage=1 00:09:43.867 --rc genhtml_legend=1 00:09:43.867 --rc geninfo_all_blocks=1 00:09:43.867 --rc geninfo_unexecuted_blocks=1 00:09:43.868 00:09:43.868 ' 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.868 --rc genhtml_branch_coverage=1 00:09:43.868 --rc genhtml_function_coverage=1 00:09:43.868 --rc genhtml_legend=1 00:09:43.868 --rc geninfo_all_blocks=1 00:09:43.868 --rc geninfo_unexecuted_blocks=1 00:09:43.868 00:09:43.868 ' 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.868 --rc genhtml_branch_coverage=1 00:09:43.868 --rc genhtml_function_coverage=1 00:09:43.868 --rc genhtml_legend=1 00:09:43.868 --rc geninfo_all_blocks=1 00:09:43.868 --rc geninfo_unexecuted_blocks=1 00:09:43.868 00:09:43.868 ' 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.868 11:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.868 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.152 11:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.152 11:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.152 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:47.153 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:47.153 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.153 11:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:47.153 Found net devices under 0000:82:00.0: cvl_0_0 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.153 11:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:47.153 Found net devices under 0000:82:00.1: cvl_0_1 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.153 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:47.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:09:47.153 00:09:47.153 --- 10.0.0.2 ping statistics --- 00:09:47.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.153 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:09:47.153 00:09:47.153 --- 10.0.0.1 ping statistics --- 00:09:47.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.153 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.153 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2545113 00:09:47.153 11:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2545113 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2545113 ']' 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 [2024-11-19 11:11:42.212966] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:09:47.154 [2024-11-19 11:11:42.213042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.154 [2024-11-19 11:11:42.297046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.154 [2024-11-19 11:11:42.354099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:47.154 [2024-11-19 11:11:42.354157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.154 [2024-11-19 11:11:42.354186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.154 [2024-11-19 11:11:42.354197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.154 [2024-11-19 11:11:42.354206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.154 [2024-11-19 11:11:42.355809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.154 [2024-11-19 11:11:42.355920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.154 [2024-11-19 11:11:42.356055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.154 [2024-11-19 11:11:42.356059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 [2024-11-19 11:11:42.502468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 Null1 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 
11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 [2024-11-19 11:11:42.542818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 Null2 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 
11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 Null3 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 Null4 00:09:47.154 
11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:47.154 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.155 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.155 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.155 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:47.155 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.155 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.411 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.411 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:47.411 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.411 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.411 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 4420 00:09:47.412 00:09:47.412 Discovery Log Number of Records 6, Generation counter 6 00:09:47.412 =====Discovery Log Entry 0====== 00:09:47.412 trtype: tcp 00:09:47.412 adrfam: ipv4 00:09:47.412 subtype: current discovery subsystem 00:09:47.412 treq: not required 00:09:47.412 portid: 0 00:09:47.412 trsvcid: 4420 00:09:47.412 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:47.412 traddr: 10.0.0.2 00:09:47.412 eflags: explicit discovery connections, duplicate discovery information 00:09:47.412 sectype: none 00:09:47.412 =====Discovery Log Entry 1====== 00:09:47.412 trtype: tcp 00:09:47.412 adrfam: ipv4 00:09:47.412 subtype: nvme subsystem 00:09:47.412 treq: not required 00:09:47.412 portid: 0 00:09:47.412 trsvcid: 4420 00:09:47.412 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:47.412 traddr: 10.0.0.2 00:09:47.412 eflags: none 00:09:47.412 sectype: none 00:09:47.412 =====Discovery Log Entry 2====== 00:09:47.412 
trtype: tcp 00:09:47.412 adrfam: ipv4 00:09:47.412 subtype: nvme subsystem 00:09:47.412 treq: not required 00:09:47.412 portid: 0 00:09:47.412 trsvcid: 4420 00:09:47.412 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:47.412 traddr: 10.0.0.2 00:09:47.412 eflags: none 00:09:47.412 sectype: none 00:09:47.412 =====Discovery Log Entry 3====== 00:09:47.412 trtype: tcp 00:09:47.412 adrfam: ipv4 00:09:47.412 subtype: nvme subsystem 00:09:47.412 treq: not required 00:09:47.412 portid: 0 00:09:47.412 trsvcid: 4420 00:09:47.412 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:47.412 traddr: 10.0.0.2 00:09:47.412 eflags: none 00:09:47.412 sectype: none 00:09:47.412 =====Discovery Log Entry 4====== 00:09:47.412 trtype: tcp 00:09:47.412 adrfam: ipv4 00:09:47.412 subtype: nvme subsystem 00:09:47.412 treq: not required 00:09:47.412 portid: 0 00:09:47.412 trsvcid: 4420 00:09:47.412 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:47.412 traddr: 10.0.0.2 00:09:47.412 eflags: none 00:09:47.412 sectype: none 00:09:47.412 =====Discovery Log Entry 5====== 00:09:47.412 trtype: tcp 00:09:47.412 adrfam: ipv4 00:09:47.412 subtype: discovery subsystem referral 00:09:47.412 treq: not required 00:09:47.412 portid: 0 00:09:47.412 trsvcid: 4430 00:09:47.412 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:47.412 traddr: 10.0.0.2 00:09:47.412 eflags: none 00:09:47.412 sectype: none 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:47.412 Perform nvmf subsystem discovery via RPC 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.412 [ 00:09:47.412 { 00:09:47.412 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:09:47.412 "subtype": "Discovery", 00:09:47.412 "listen_addresses": [ 00:09:47.412 { 00:09:47.412 "trtype": "TCP", 00:09:47.412 "adrfam": "IPv4", 00:09:47.412 "traddr": "10.0.0.2", 00:09:47.412 "trsvcid": "4420" 00:09:47.412 } 00:09:47.412 ], 00:09:47.412 "allow_any_host": true, 00:09:47.412 "hosts": [] 00:09:47.412 }, 00:09:47.412 { 00:09:47.412 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.412 "subtype": "NVMe", 00:09:47.412 "listen_addresses": [ 00:09:47.412 { 00:09:47.412 "trtype": "TCP", 00:09:47.412 "adrfam": "IPv4", 00:09:47.412 "traddr": "10.0.0.2", 00:09:47.412 "trsvcid": "4420" 00:09:47.412 } 00:09:47.412 ], 00:09:47.412 "allow_any_host": true, 00:09:47.412 "hosts": [], 00:09:47.412 "serial_number": "SPDK00000000000001", 00:09:47.412 "model_number": "SPDK bdev Controller", 00:09:47.412 "max_namespaces": 32, 00:09:47.412 "min_cntlid": 1, 00:09:47.412 "max_cntlid": 65519, 00:09:47.412 "namespaces": [ 00:09:47.412 { 00:09:47.412 "nsid": 1, 00:09:47.412 "bdev_name": "Null1", 00:09:47.412 "name": "Null1", 00:09:47.412 "nguid": "4FA38E0C26B349448287EC2CD826D421", 00:09:47.412 "uuid": "4fa38e0c-26b3-4944-8287-ec2cd826d421" 00:09:47.412 } 00:09:47.412 ] 00:09:47.412 }, 00:09:47.412 { 00:09:47.412 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:47.412 "subtype": "NVMe", 00:09:47.412 "listen_addresses": [ 00:09:47.412 { 00:09:47.412 "trtype": "TCP", 00:09:47.412 "adrfam": "IPv4", 00:09:47.412 "traddr": "10.0.0.2", 00:09:47.412 "trsvcid": "4420" 00:09:47.412 } 00:09:47.412 ], 00:09:47.412 "allow_any_host": true, 00:09:47.412 "hosts": [], 00:09:47.412 "serial_number": "SPDK00000000000002", 00:09:47.412 "model_number": "SPDK bdev Controller", 00:09:47.412 "max_namespaces": 32, 00:09:47.412 "min_cntlid": 1, 00:09:47.412 "max_cntlid": 65519, 00:09:47.412 "namespaces": [ 00:09:47.412 { 00:09:47.412 "nsid": 1, 00:09:47.412 "bdev_name": "Null2", 00:09:47.412 "name": "Null2", 00:09:47.412 "nguid": "EFF20B60AD8D44BDAEDCCC4F82FAF2D7", 
00:09:47.412 "uuid": "eff20b60-ad8d-44bd-aedc-cc4f82faf2d7" 00:09:47.412 } 00:09:47.412 ] 00:09:47.412 }, 00:09:47.412 { 00:09:47.412 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:47.412 "subtype": "NVMe", 00:09:47.412 "listen_addresses": [ 00:09:47.412 { 00:09:47.412 "trtype": "TCP", 00:09:47.412 "adrfam": "IPv4", 00:09:47.412 "traddr": "10.0.0.2", 00:09:47.412 "trsvcid": "4420" 00:09:47.412 } 00:09:47.412 ], 00:09:47.412 "allow_any_host": true, 00:09:47.412 "hosts": [], 00:09:47.412 "serial_number": "SPDK00000000000003", 00:09:47.412 "model_number": "SPDK bdev Controller", 00:09:47.412 "max_namespaces": 32, 00:09:47.412 "min_cntlid": 1, 00:09:47.412 "max_cntlid": 65519, 00:09:47.412 "namespaces": [ 00:09:47.412 { 00:09:47.412 "nsid": 1, 00:09:47.412 "bdev_name": "Null3", 00:09:47.412 "name": "Null3", 00:09:47.412 "nguid": "7FCF0197167849708991E222D96ECC38", 00:09:47.412 "uuid": "7fcf0197-1678-4970-8991-e222d96ecc38" 00:09:47.412 } 00:09:47.412 ] 00:09:47.412 }, 00:09:47.412 { 00:09:47.412 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:47.412 "subtype": "NVMe", 00:09:47.412 "listen_addresses": [ 00:09:47.412 { 00:09:47.412 "trtype": "TCP", 00:09:47.412 "adrfam": "IPv4", 00:09:47.412 "traddr": "10.0.0.2", 00:09:47.412 "trsvcid": "4420" 00:09:47.412 } 00:09:47.412 ], 00:09:47.412 "allow_any_host": true, 00:09:47.412 "hosts": [], 00:09:47.412 "serial_number": "SPDK00000000000004", 00:09:47.412 "model_number": "SPDK bdev Controller", 00:09:47.412 "max_namespaces": 32, 00:09:47.412 "min_cntlid": 1, 00:09:47.412 "max_cntlid": 65519, 00:09:47.412 "namespaces": [ 00:09:47.412 { 00:09:47.412 "nsid": 1, 00:09:47.412 "bdev_name": "Null4", 00:09:47.412 "name": "Null4", 00:09:47.412 "nguid": "4E5C49B2E11E47CCA4D3F6A941C3925A", 00:09:47.412 "uuid": "4e5c49b2-e11e-47cc-a4d3-f6a941c3925a" 00:09:47.412 } 00:09:47.412 ] 00:09:47.412 } 00:09:47.412 ] 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.412 
11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.412 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.696 rmmod nvme_tcp 00:09:47.696 rmmod nvme_fabrics 00:09:47.696 rmmod nvme_keyring 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:09:47.696 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2545113 ']' 00:09:47.697 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2545113 00:09:47.697 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2545113 ']' 00:09:47.697 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2545113 00:09:47.697 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:09:47.697 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.697 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2545113 00:09:47.697 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.697 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.697 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2545113' 00:09:47.697 killing process with pid 2545113 00:09:47.697 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2545113 00:09:47.697 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2545113 00:09:48.001 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:48.001 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:48.001 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:48.001 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:09:48.001 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:09:48.001 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:48.001 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:09:48.001 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:48.001 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:48.001 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.001 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.001 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.906 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:49.906 00:09:49.906 real 0m6.236s 00:09:49.906 user 0m4.924s 00:09:49.906 sys 0m2.410s 00:09:49.906 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.906 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:49.906 ************************************ 00:09:49.906 END TEST nvmf_target_discovery 00:09:49.906 ************************************ 00:09:49.906 11:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:49.906 11:11:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:49.906 11:11:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.906 11:11:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:50.165 ************************************ 00:09:50.165 START TEST nvmf_referrals 00:09:50.165 ************************************ 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:50.165 * Looking for test storage... 
00:09:50.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:09:50.165 11:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.165 
--rc genhtml_branch_coverage=1 00:09:50.165 --rc genhtml_function_coverage=1 00:09:50.165 --rc genhtml_legend=1 00:09:50.165 --rc geninfo_all_blocks=1 00:09:50.165 --rc geninfo_unexecuted_blocks=1 00:09:50.165 00:09:50.165 ' 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.165 --rc genhtml_branch_coverage=1 00:09:50.165 --rc genhtml_function_coverage=1 00:09:50.165 --rc genhtml_legend=1 00:09:50.165 --rc geninfo_all_blocks=1 00:09:50.165 --rc geninfo_unexecuted_blocks=1 00:09:50.165 00:09:50.165 ' 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.165 --rc genhtml_branch_coverage=1 00:09:50.165 --rc genhtml_function_coverage=1 00:09:50.165 --rc genhtml_legend=1 00:09:50.165 --rc geninfo_all_blocks=1 00:09:50.165 --rc geninfo_unexecuted_blocks=1 00:09:50.165 00:09:50.165 ' 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.165 --rc genhtml_branch_coverage=1 00:09:50.165 --rc genhtml_function_coverage=1 00:09:50.165 --rc genhtml_legend=1 00:09:50.165 --rc geninfo_all_blocks=1 00:09:50.165 --rc geninfo_unexecuted_blocks=1 00:09:50.165 00:09:50.165 ' 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.165 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.166 
11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.166 11:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.166 11:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.166 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:53.453 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:53.453 Found 
0000:82:00.1 (0x8086 - 0x159b) 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.453 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:53.454 Found net devices under 0000:82:00.0: cvl_0_0 00:09:53.454 11:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:53.454 Found net devices under 0000:82:00.1: cvl_0_1 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:53.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:09:53.454 00:09:53.454 --- 10.0.0.2 ping statistics --- 00:09:53.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.454 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:53.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:09:53.454 00:09:53.454 --- 10.0.0.1 ping statistics --- 00:09:53.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.454 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2547623 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2547623 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2547623 ']' 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.454 [2024-11-19 11:11:48.449250] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:09:53.454 [2024-11-19 11:11:48.449323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.454 [2024-11-19 11:11:48.530477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.454 [2024-11-19 11:11:48.587118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.454 [2024-11-19 11:11:48.587187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:53.454 [2024-11-19 11:11:48.587215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.454 [2024-11-19 11:11:48.587227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.454 [2024-11-19 11:11:48.587236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.454 [2024-11-19 11:11:48.588863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.454 [2024-11-19 11:11:48.588916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.454 [2024-11-19 11:11:48.588984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.454 [2024-11-19 11:11:48.588988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.454 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.455 [2024-11-19 11:11:48.743876] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.455 [2024-11-19 11:11:48.756090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:53.455 11:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:53.455 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:53.713 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:53.713 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:53.713 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:53.713 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.713 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.713 11:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:53.713 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:53.969 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:54.227 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:54.483 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:54.739 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:54.739 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:54.739 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:54.739 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:54.739 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:54.739 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:54.739 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:54.739 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:54.997 11:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:54.997 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:55.254 rmmod nvme_tcp 00:09:55.254 rmmod nvme_fabrics 00:09:55.254 rmmod nvme_keyring 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2547623 ']' 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2547623 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2547623 ']' 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2547623 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2547623 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2547623' 00:09:55.254 killing process with pid 2547623 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2547623 00:09:55.254 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2547623 00:09:55.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:55.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:55.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:55.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:09:55.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:09:55.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:55.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:09:55.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:55.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:55.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.512 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.049 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:58.049 00:09:58.049 real 0m7.560s 00:09:58.049 user 0m10.986s 00:09:58.049 sys 0m2.700s 00:09:58.049 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.049 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 
************************************ 00:09:58.049 END TEST nvmf_referrals 00:09:58.049 ************************************ 00:09:58.049 11:11:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:58.049 11:11:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.049 11:11:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.049 11:11:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 ************************************ 00:09:58.049 START TEST nvmf_connect_disconnect 00:09:58.049 ************************************ 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:58.049 * Looking for test storage... 
00:09:58.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:58.049 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:58.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.050 --rc genhtml_branch_coverage=1 00:09:58.050 --rc genhtml_function_coverage=1 00:09:58.050 --rc genhtml_legend=1 00:09:58.050 --rc geninfo_all_blocks=1 00:09:58.050 --rc geninfo_unexecuted_blocks=1 00:09:58.050 00:09:58.050 ' 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:58.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.050 --rc genhtml_branch_coverage=1 00:09:58.050 --rc genhtml_function_coverage=1 00:09:58.050 --rc genhtml_legend=1 00:09:58.050 --rc geninfo_all_blocks=1 00:09:58.050 --rc geninfo_unexecuted_blocks=1 00:09:58.050 00:09:58.050 ' 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:58.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.050 --rc genhtml_branch_coverage=1 00:09:58.050 --rc genhtml_function_coverage=1 00:09:58.050 --rc genhtml_legend=1 00:09:58.050 --rc geninfo_all_blocks=1 00:09:58.050 --rc geninfo_unexecuted_blocks=1 00:09:58.050 00:09:58.050 ' 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:58.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.050 --rc genhtml_branch_coverage=1 00:09:58.050 --rc genhtml_function_coverage=1 00:09:58.050 --rc genhtml_legend=1 00:09:58.050 --rc geninfo_all_blocks=1 00:09:58.050 --rc geninfo_unexecuted_blocks=1 00:09:58.050 00:09:58.050 ' 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:09:58.050 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.581 11:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:00.581 11:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:10:00.581 Found 0000:82:00.0 (0x8086 - 0x159b) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:10:00.581 Found 0000:82:00.1 (0x8086 - 0x159b) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.581 11:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:10:00.581 Found net devices under 0000:82:00.0: cvl_0_0 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:00.581 11:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:10:00.581 Found net devices under 0000:82:00.1: cvl_0_1 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:00.581 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.582 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.582 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:00.582 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:00.582 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.582 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.582 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.582 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.582 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:00.582 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.582 11:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:00.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:10:00.582 00:10:00.582 --- 10.0.0.2 ping statistics --- 00:10:00.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.582 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:10:00.582 00:10:00.582 --- 10.0.0.1 ping statistics --- 00:10:00.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.582 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2550341 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2550341 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2550341 ']' 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.582 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:00.841 [2024-11-19 11:11:56.120001] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:10:00.841 [2024-11-19 11:11:56.120098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.841 [2024-11-19 11:11:56.200907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.841 [2024-11-19 11:11:56.257669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:00.841 [2024-11-19 11:11:56.257713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.841 [2024-11-19 11:11:56.257751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.841 [2024-11-19 11:11:56.257762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.841 [2024-11-19 11:11:56.257772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.841 [2024-11-19 11:11:56.259354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.841 [2024-11-19 11:11:56.259415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.841 [2024-11-19 11:11:56.259453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.841 [2024-11-19 11:11:56.259456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:01.099 11:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:01.099 [2024-11-19 11:11:56.397790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.099 11:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:01.099 [2024-11-19 11:11:56.459201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:01.099 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:04.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:15.244 11:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.244 rmmod nvme_tcp 00:10:15.244 rmmod nvme_fabrics 00:10:15.244 rmmod nvme_keyring 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2550341 ']' 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2550341 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2550341 ']' 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2550341 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2550341 
00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2550341' 00:10:15.244 killing process with pid 2550341 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2550341 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2550341 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.244 11:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.244 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.149 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.149 00:10:17.149 real 0m19.612s 00:10:17.149 user 0m56.991s 00:10:17.149 sys 0m3.958s 00:10:17.149 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.149 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:17.149 ************************************ 00:10:17.149 END TEST nvmf_connect_disconnect 00:10:17.149 ************************************ 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:17.408 ************************************ 00:10:17.408 START TEST nvmf_multitarget 00:10:17.408 ************************************ 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:17.408 * Looking for test storage... 
00:10:17.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.408 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.409 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.409 --rc genhtml_branch_coverage=1 00:10:17.409 --rc genhtml_function_coverage=1 00:10:17.409 --rc genhtml_legend=1 00:10:17.409 --rc geninfo_all_blocks=1 00:10:17.409 --rc geninfo_unexecuted_blocks=1 00:10:17.409 00:10:17.409 ' 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.409 --rc genhtml_branch_coverage=1 00:10:17.409 --rc genhtml_function_coverage=1 00:10:17.409 --rc genhtml_legend=1 00:10:17.409 --rc geninfo_all_blocks=1 00:10:17.409 --rc geninfo_unexecuted_blocks=1 00:10:17.409 00:10:17.409 ' 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.409 --rc genhtml_branch_coverage=1 00:10:17.409 --rc genhtml_function_coverage=1 00:10:17.409 --rc genhtml_legend=1 00:10:17.409 --rc geninfo_all_blocks=1 00:10:17.409 --rc geninfo_unexecuted_blocks=1 00:10:17.409 00:10:17.409 ' 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.409 --rc genhtml_branch_coverage=1 00:10:17.409 --rc genhtml_function_coverage=1 00:10:17.409 --rc genhtml_legend=1 00:10:17.409 --rc geninfo_all_blocks=1 00:10:17.409 --rc geninfo_unexecuted_blocks=1 00:10:17.409 00:10:17.409 ' 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.409 11:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.409 11:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.409 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.410 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.410 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:20.694 11:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.694 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.695 11:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:10:20.695 Found 0000:82:00.0 (0x8086 - 0x159b) 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:10:20.695 Found 0000:82:00.1 (0x8086 - 0x159b) 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.695 11:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:10:20.695 Found net devices under 0000:82:00.0: cvl_0_0 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.695 
11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:10:20.695 Found net devices under 0000:82:00.1: cvl_0_1 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.695 11:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:20.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:10:20.695 00:10:20.695 --- 10.0.0.2 ping statistics --- 00:10:20.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.695 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:10:20.695 00:10:20.695 --- 10.0.0.1 ping statistics --- 00:10:20.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.695 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2555016 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2555016 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2555016 ']' 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.695 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:20.696 [2024-11-19 11:12:15.817473] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:10:20.696 [2024-11-19 11:12:15.817559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.696 [2024-11-19 11:12:15.897863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.696 [2024-11-19 11:12:15.953379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.696 [2024-11-19 11:12:15.953437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:20.696 [2024-11-19 11:12:15.953465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.696 [2024-11-19 11:12:15.953476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.696 [2024-11-19 11:12:15.953485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.696 [2024-11-19 11:12:15.954976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.696 [2024-11-19 11:12:15.955084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.696 [2024-11-19 11:12:15.955160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.696 [2024-11-19 11:12:15.955163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.696 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.696 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:20.696 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:20.696 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:20.696 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:20.696 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.696 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:20.696 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:20.696 11:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:20.953 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:20.953 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:20.953 "nvmf_tgt_1" 00:10:20.953 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:21.212 "nvmf_tgt_2" 00:10:21.212 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:21.212 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:21.212 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:21.212 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:21.212 true 00:10:21.212 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:21.471 true 00:10:21.471 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:21.471 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:21.471 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:21.471 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:21.471 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:21.471 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:21.471 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:21.471 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.471 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:21.471 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.471 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.471 rmmod nvme_tcp 00:10:21.471 rmmod nvme_fabrics 00:10:21.729 rmmod nvme_keyring 00:10:21.729 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.729 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:21.729 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:21.729 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2555016 ']' 00:10:21.729 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2555016 00:10:21.729 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2555016 ']' 00:10:21.729 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2555016 00:10:21.729 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:21.729 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.729 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2555016 00:10:21.729 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.729 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.729 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2555016' 00:10:21.729 killing process with pid 2555016 00:10:21.729 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2555016 00:10:21.729 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2555016 00:10:21.991 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:21.991 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:21.991 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:21.991 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:21.991 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:21.991 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:21.991 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:21.991 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:21.991 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:21.991 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.991 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.991 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.932 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:23.932 00:10:23.932 real 0m6.625s 00:10:23.932 user 0m7.082s 00:10:23.932 sys 0m2.492s 00:10:23.932 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.932 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:23.932 ************************************ 00:10:23.932 END TEST nvmf_multitarget 00:10:23.932 ************************************ 00:10:23.932 11:12:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:23.932 11:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:23.932 11:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.932 11:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:23.932 ************************************ 00:10:23.932 START TEST nvmf_rpc 00:10:23.932 ************************************ 00:10:23.932 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:23.932 * Looking for test storage... 
00:10:23.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.932 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:23.932 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:23.932 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.191 11:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:24.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.191 --rc genhtml_branch_coverage=1 00:10:24.191 --rc genhtml_function_coverage=1 00:10:24.191 --rc genhtml_legend=1 00:10:24.191 --rc geninfo_all_blocks=1 00:10:24.191 --rc geninfo_unexecuted_blocks=1 
00:10:24.191 00:10:24.191 ' 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:24.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.191 --rc genhtml_branch_coverage=1 00:10:24.191 --rc genhtml_function_coverage=1 00:10:24.191 --rc genhtml_legend=1 00:10:24.191 --rc geninfo_all_blocks=1 00:10:24.191 --rc geninfo_unexecuted_blocks=1 00:10:24.191 00:10:24.191 ' 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:24.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.191 --rc genhtml_branch_coverage=1 00:10:24.191 --rc genhtml_function_coverage=1 00:10:24.191 --rc genhtml_legend=1 00:10:24.191 --rc geninfo_all_blocks=1 00:10:24.191 --rc geninfo_unexecuted_blocks=1 00:10:24.191 00:10:24.191 ' 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:24.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.191 --rc genhtml_branch_coverage=1 00:10:24.191 --rc genhtml_function_coverage=1 00:10:24.191 --rc genhtml_legend=1 00:10:24.191 --rc geninfo_all_blocks=1 00:10:24.191 --rc geninfo_unexecuted_blocks=1 00:10:24.191 00:10:24.191 ' 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.191 11:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.191 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:24.192 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.192 11:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.477 
11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 
(0x8086 - 0x159b)' 00:10:27.477 Found 0000:82:00.0 (0x8086 - 0x159b) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:10:27.477 Found 0000:82:00.1 (0x8086 - 0x159b) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:10:27.477 Found net devices under 0000:82:00.0: cvl_0_0 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:10:27.477 Found net devices under 0000:82:00.1: cvl_0_1 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.477 11:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.477 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.478 
11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:10:27.478 00:10:27.478 --- 10.0.0.2 ping statistics --- 00:10:27.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.478 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:27.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:10:27.478 00:10:27.478 --- 10.0.0.1 ping statistics --- 00:10:27.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.478 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2557541 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:27.478 
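The `ipts` call in the trace above wraps iptables so that every rule the test inserts is tagged with an identifying comment (`SPDK_NVMF:<original args>`), letting cleanup later remove exactly those rules. A minimal re-implementation of that tagging idea; this sketch only *prints* the command instead of executing it, so it needs no root privileges, whereas the real helper runs iptables directly:

```shell
#!/bin/sh
# Sketch of the ipts wrapper: append a "-m comment" match whose text
# embeds the original arguments, matching the rule seen in the log.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Teardown can then delete only SPDK's rules by grepping `iptables-save` output for the `SPDK_NVMF:` marker.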
11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2557541 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2557541 ']' 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.478 [2024-11-19 11:12:22.452978] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:10:27.478 [2024-11-19 11:12:22.453067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.478 [2024-11-19 11:12:22.535238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.478 [2024-11-19 11:12:22.594951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.478 [2024-11-19 11:12:22.595017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.478 [2024-11-19 11:12:22.595046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.478 [2024-11-19 11:12:22.595058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
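`waitforlisten` above blocks until the freshly launched nvmf_tgt is ready on its RPC socket at `/var/tmp/spdk.sock`, giving up after `max_retries=100`. A minimal sketch of that wait loop, polling for a plain file instead of the real UNIX socket so it runs anywhere; the retry budget mirrors the log, the rest (helper name, demo timing) is invented for illustration:

```shell
#!/bin/sh
# Sketch of a waitforlisten-style loop: poll until a path appears or the
# retry budget is exhausted. The real helper also checks the target pid
# is alive and that the RPC socket answers; this models only the polling.
wait_for_path() {
    path=$1 max_retries=${2:-100} i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -e "$path" ] && return 0
        i=$((i + 1))
        sleep 0.1 2>/dev/null || sleep 1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

# Demo: create the path in the background, then wait for it.
target=$(mktemp -u)
( sleep 1; touch "$target" ) &
wait_for_path "$target" && echo "listener is up"
rm -f "$target"
```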
00:10:27.478 [2024-11-19 11:12:22.595068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.478 [2024-11-19 11:12:22.596800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.478 [2024-11-19 11:12:22.596867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.478 [2024-11-19 11:12:22.596933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.478 [2024-11-19 11:12:22.596937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:27.478 "tick_rate": 2700000000, 00:10:27.478 "poll_groups": [ 00:10:27.478 { 00:10:27.478 "name": "nvmf_tgt_poll_group_000", 00:10:27.478 "admin_qpairs": 0, 00:10:27.478 "io_qpairs": 0, 00:10:27.478 
"current_admin_qpairs": 0, 00:10:27.478 "current_io_qpairs": 0, 00:10:27.478 "pending_bdev_io": 0, 00:10:27.478 "completed_nvme_io": 0, 00:10:27.478 "transports": [] 00:10:27.478 }, 00:10:27.478 { 00:10:27.478 "name": "nvmf_tgt_poll_group_001", 00:10:27.478 "admin_qpairs": 0, 00:10:27.478 "io_qpairs": 0, 00:10:27.478 "current_admin_qpairs": 0, 00:10:27.478 "current_io_qpairs": 0, 00:10:27.478 "pending_bdev_io": 0, 00:10:27.478 "completed_nvme_io": 0, 00:10:27.478 "transports": [] 00:10:27.478 }, 00:10:27.478 { 00:10:27.478 "name": "nvmf_tgt_poll_group_002", 00:10:27.478 "admin_qpairs": 0, 00:10:27.478 "io_qpairs": 0, 00:10:27.478 "current_admin_qpairs": 0, 00:10:27.478 "current_io_qpairs": 0, 00:10:27.478 "pending_bdev_io": 0, 00:10:27.478 "completed_nvme_io": 0, 00:10:27.478 "transports": [] 00:10:27.478 }, 00:10:27.478 { 00:10:27.478 "name": "nvmf_tgt_poll_group_003", 00:10:27.478 "admin_qpairs": 0, 00:10:27.478 "io_qpairs": 0, 00:10:27.478 "current_admin_qpairs": 0, 00:10:27.478 "current_io_qpairs": 0, 00:10:27.478 "pending_bdev_io": 0, 00:10:27.478 "completed_nvme_io": 0, 00:10:27.478 "transports": [] 00:10:27.478 } 00:10:27.478 ] 00:10:27.478 }' 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.478 [2024-11-19 11:12:22.849731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.478 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:27.479 "tick_rate": 2700000000, 00:10:27.479 "poll_groups": [ 00:10:27.479 { 00:10:27.479 "name": "nvmf_tgt_poll_group_000", 00:10:27.479 "admin_qpairs": 0, 00:10:27.479 "io_qpairs": 0, 00:10:27.479 "current_admin_qpairs": 0, 00:10:27.479 "current_io_qpairs": 0, 00:10:27.479 "pending_bdev_io": 0, 00:10:27.479 "completed_nvme_io": 0, 00:10:27.479 "transports": [ 00:10:27.479 { 00:10:27.479 "trtype": "TCP" 00:10:27.479 } 00:10:27.479 ] 00:10:27.479 }, 00:10:27.479 { 00:10:27.479 "name": "nvmf_tgt_poll_group_001", 00:10:27.479 "admin_qpairs": 0, 00:10:27.479 "io_qpairs": 0, 00:10:27.479 "current_admin_qpairs": 0, 00:10:27.479 "current_io_qpairs": 0, 00:10:27.479 "pending_bdev_io": 0, 00:10:27.479 "completed_nvme_io": 0, 00:10:27.479 "transports": [ 00:10:27.479 { 00:10:27.479 "trtype": "TCP" 00:10:27.479 } 00:10:27.479 ] 00:10:27.479 }, 00:10:27.479 { 00:10:27.479 "name": "nvmf_tgt_poll_group_002", 00:10:27.479 "admin_qpairs": 0, 00:10:27.479 "io_qpairs": 0, 00:10:27.479 
"current_admin_qpairs": 0, 00:10:27.479 "current_io_qpairs": 0, 00:10:27.479 "pending_bdev_io": 0, 00:10:27.479 "completed_nvme_io": 0, 00:10:27.479 "transports": [ 00:10:27.479 { 00:10:27.479 "trtype": "TCP" 00:10:27.479 } 00:10:27.479 ] 00:10:27.479 }, 00:10:27.479 { 00:10:27.479 "name": "nvmf_tgt_poll_group_003", 00:10:27.479 "admin_qpairs": 0, 00:10:27.479 "io_qpairs": 0, 00:10:27.479 "current_admin_qpairs": 0, 00:10:27.479 "current_io_qpairs": 0, 00:10:27.479 "pending_bdev_io": 0, 00:10:27.479 "completed_nvme_io": 0, 00:10:27.479 "transports": [ 00:10:27.479 { 00:10:27.479 "trtype": "TCP" 00:10:27.479 } 00:10:27.479 ] 00:10:27.479 } 00:10:27.479 ] 00:10:27.479 }' 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.479 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.737 Malloc1 00:10:27.737 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.737 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:27.737 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.737 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.737 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.737 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.737 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.737 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.737 [2024-11-19 11:12:23.019621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:27.737 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:27.738 
11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:10:27.738 [2024-11-19 11:12:23.042308] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd' 00:10:27.738 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:27.738 could not add new controller: failed to write to nvme-fabrics device 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.738 11:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.738 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:28.303 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:28.303 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:28.303 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:28.303 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:28.303 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:30.199 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:30.199 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:30.199 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:30.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.457 11:12:25 
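The `waitforserial` sequence above re-runs `lsblk -l -o NAME,SERIAL | grep -c <serial>` (with `i++ <= 15` as the retry bound) until the expected number of namespaces appears. A generic sketch of that count-until-match loop, with a stub function standing in for the lsblk pipeline so the example is self-contained (the stub name and demo timing are invented):

```shell
#!/bin/sh
# Sketch of the waitforserial pattern: re-run a counting command until it
# reports at least the expected number of devices, up to 15 attempts.
wait_for_count() {
    # $1 = expected count, remaining args = a command that prints a number
    expected=$1; shift
    i=0
    while [ "$i" -lt 15 ]; do
        n=$("$@")
        [ "$n" -ge "$expected" ] && return 0
        i=$((i + 1))
        sleep 0.1 2>/dev/null || sleep 1
    done
    return 1
}

# Stub standing in for: lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
count_stub() { echo 1; }

wait_for_count 1 count_stub && echo "serial visible"
```

Note that in the real pipeline `grep -c` prints `0` and exits nonzero when nothing matches yet, so the count, not the exit status, drives the loop.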
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:30.457 [2024-11-19 11:12:25.801975] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd' 00:10:30.457 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:30.457 could not add new controller: failed to write to nvme-fabrics device 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:30.457 11:12:25 
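Both rejected connect attempts above run through a `NOT`/`valid_exec_arg` wrapper that treats command failure as the passing outcome (the subsystem is expected to refuse the unauthorized host). The inversion itself reduces to a few lines; this is a re-implementation of the idea, not SPDK's exact helper, which additionally validates the executable with `type -t`/`type -P` and tracks the exit status in `es`:

```shell
#!/bin/sh
# Sketch of a NOT helper: the test passes only if the wrapped command
# fails, as with the nvme connect calls blocked by allow_any_host above.
NOT() {
    if "$@"; then
        return 1   # unexpected success
    fi
    return 0       # failure was expected
}

NOT false && echo "command failed, as the test requires"
```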
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.457 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:31.023 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:31.023 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:31.023 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.023 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:31.023 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:33.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.550 [2024-11-19 11:12:28.597869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.550 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:34.115 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:34.115 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:34.115 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.115 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:34.115 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.018 11:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.018 [2024-11-19 11:12:31.462504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.018 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:36.951 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:36.951 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:36.951 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.951 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:36.951 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:38.852 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:38.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.853 [2024-11-19 11:12:34.278750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.853 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:39.418 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.418 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:39.418 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:10:39.418 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:39.418 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:41.946 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:41.946 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:41.946 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:41.946 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:41.946 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.946 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:41.946 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.946 [2024-11-19 11:12:37.066431] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.946 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:42.513 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:42.513 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:42.513 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.513 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:42.513 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:44.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.411 [2024-11-19 11:12:39.902007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.411 11:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.411 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.668 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.668 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:44.668 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.668 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.668 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.668 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:45.238 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:45.238 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:45.238 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:45.238 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:45.238 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:47.150 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:47.150 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:10:47.150 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.150 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:47.150 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.150 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:47.150 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.408 [2024-11-19 11:12:42.740404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.408 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 [2024-11-19 11:12:42.788486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.409 
11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 [2024-11-19 11:12:42.836660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
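The records above are iterations of the create/teardown loop in `target/rpc.sh` (lines 99-107 of that script), which repeatedly builds and destroys the same subsystem over TCP. A minimal sketch of that sequence follows; the RPC names, NQN, address, and serial are taken from the log, while the `churn_subsystem` wrapper and its loop-count parameter are illustrative (they are not SPDK code, and the log does not show the actual `$loops` value):

```shell
# Sketch of the subsystem churn loop seen in the log (target/rpc.sh@99-107).
# $1 is the RPC command to use (e.g. ./scripts/rpc.py against a live target),
# $2 is an illustrative iteration count.
churn_subsystem() {
    local rpc=$1 loops=${2:-1} i
    local nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 "$loops"); do
        "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1
        "$rpc" nvmf_subsystem_allow_any_host "$nqn"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1
        "$rpc" nvmf_delete_subsystem "$nqn"
    done
}

# Dry run: substitute echo for rpc.py to print the RPC sequence
# instead of issuing it against a target.
churn_subsystem echo 1
```

Against a running nvmf target this would be invoked as `churn_subsystem ./scripts/rpc.py 5`; the dry-run form above only prints the command sequence.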
00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:47.409 
11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 [2024-11-19 11:12:42.884819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.409 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.667 [2024-11-19 
11:12:42.932955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.667 
11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.667 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:47.667 "tick_rate": 2700000000, 00:10:47.667 "poll_groups": [ 00:10:47.667 { 00:10:47.667 "name": "nvmf_tgt_poll_group_000", 00:10:47.667 "admin_qpairs": 2, 00:10:47.667 "io_qpairs": 84, 00:10:47.667 "current_admin_qpairs": 0, 00:10:47.667 "current_io_qpairs": 0, 00:10:47.667 "pending_bdev_io": 0, 00:10:47.667 "completed_nvme_io": 184, 00:10:47.667 "transports": [ 00:10:47.667 { 00:10:47.667 "trtype": "TCP" 00:10:47.667 } 00:10:47.667 ] 00:10:47.667 }, 00:10:47.667 { 00:10:47.667 "name": "nvmf_tgt_poll_group_001", 00:10:47.667 "admin_qpairs": 2, 00:10:47.667 "io_qpairs": 84, 00:10:47.667 "current_admin_qpairs": 0, 00:10:47.667 "current_io_qpairs": 0, 00:10:47.667 "pending_bdev_io": 0, 00:10:47.667 "completed_nvme_io": 183, 00:10:47.667 "transports": [ 00:10:47.667 { 00:10:47.667 "trtype": "TCP" 00:10:47.667 } 00:10:47.667 ] 00:10:47.667 }, 00:10:47.667 { 00:10:47.667 "name": "nvmf_tgt_poll_group_002", 00:10:47.667 "admin_qpairs": 1, 00:10:47.667 "io_qpairs": 84, 00:10:47.667 "current_admin_qpairs": 0, 00:10:47.667 "current_io_qpairs": 0, 00:10:47.667 "pending_bdev_io": 0, 00:10:47.667 "completed_nvme_io": 150, 00:10:47.667 "transports": [ 00:10:47.667 { 00:10:47.668 "trtype": "TCP" 00:10:47.668 } 00:10:47.668 ] 00:10:47.668 }, 00:10:47.668 { 00:10:47.668 "name": "nvmf_tgt_poll_group_003", 00:10:47.668 "admin_qpairs": 2, 00:10:47.668 "io_qpairs": 84, 
00:10:47.668 "current_admin_qpairs": 0, 00:10:47.668 "current_io_qpairs": 0, 00:10:47.668 "pending_bdev_io": 0, 00:10:47.668 "completed_nvme_io": 169, 00:10:47.668 "transports": [ 00:10:47.668 { 00:10:47.668 "trtype": "TCP" 00:10:47.668 } 00:10:47.668 ] 00:10:47.668 } 00:10:47.668 ] 00:10:47.668 }' 00:10:47.668 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:47.668 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:47.668 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:47.668 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:47.668 rmmod nvme_tcp 00:10:47.668 rmmod nvme_fabrics 00:10:47.668 rmmod nvme_keyring 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2557541 ']' 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2557541 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2557541 ']' 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2557541 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2557541 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2557541' 00:10:47.668 killing process with pid 2557541 00:10:47.668 11:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2557541 00:10:47.668 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2557541 00:10:47.926 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.926 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.926 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.926 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:10:47.926 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:10:47.926 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.926 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.926 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.926 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.926 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.926 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.926 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:50.462 00:10:50.462 real 0m26.106s 00:10:50.462 user 1m22.734s 00:10:50.462 sys 0m4.867s 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.462 ************************************ 00:10:50.462 END TEST 
nvmf_rpc 00:10:50.462 ************************************ 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:50.462 ************************************ 00:10:50.462 START TEST nvmf_invalid 00:10:50.462 ************************************ 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:50.462 * Looking for test storage... 00:10:50.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:50.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.462 --rc genhtml_branch_coverage=1 00:10:50.462 --rc genhtml_function_coverage=1 00:10:50.462 --rc genhtml_legend=1 00:10:50.462 --rc geninfo_all_blocks=1 00:10:50.462 --rc geninfo_unexecuted_blocks=1 00:10:50.462 00:10:50.462 ' 
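The xtrace above shows `scripts/common.sh` evaluating `lt 1.15 2` by splitting both versions on `.`, `-`, and `:` (`IFS=.-: read -ra ver1`) and comparing components numerically. A simplified reconstruction of that comparison is sketched below; the function name `lt` and the `IFS=.-:` splitting come from the log, but the body is an assumption-laden condensation, not the exact SPDK `cmp_versions` implementation (which handles more operators and non-numeric components):

```shell
# Simplified component-wise version comparison, modeled on the
# lt/cmp_versions trace in the log. Missing components are treated as 0;
# purely numeric components are assumed (the real script is more general).
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v a b len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0   # strictly less at this component
        (( a > b )) && return 1   # strictly greater: not less-than
    done
    return 1                      # equal is not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why `lt 1.15 2` succeeds in the log (1 < 2 at the first component) even though a plain string comparison would order "1.15" after "2" lexically at the second character.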
00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:50.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.462 --rc genhtml_branch_coverage=1 00:10:50.462 --rc genhtml_function_coverage=1 00:10:50.462 --rc genhtml_legend=1 00:10:50.462 --rc geninfo_all_blocks=1 00:10:50.462 --rc geninfo_unexecuted_blocks=1 00:10:50.462 00:10:50.462 ' 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:50.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.462 --rc genhtml_branch_coverage=1 00:10:50.462 --rc genhtml_function_coverage=1 00:10:50.462 --rc genhtml_legend=1 00:10:50.462 --rc geninfo_all_blocks=1 00:10:50.462 --rc geninfo_unexecuted_blocks=1 00:10:50.462 00:10:50.462 ' 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:50.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.462 --rc genhtml_branch_coverage=1 00:10:50.462 --rc genhtml_function_coverage=1 00:10:50.462 --rc genhtml_legend=1 00:10:50.462 --rc geninfo_all_blocks=1 00:10:50.462 --rc geninfo_unexecuted_blocks=1 00:10:50.462 00:10:50.462 ' 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.462 11:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.462 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.463 
11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.463 11:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.463 11:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.463 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.994 11:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.994 11:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:10:52.994 Found 0000:82:00.0 (0x8086 - 0x159b) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:10:52.994 Found 0000:82:00.1 (0x8086 - 0x159b) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:10:52.994 Found net devices under 0000:82:00.0: cvl_0_0 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.994 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:10:52.995 Found net devices under 0000:82:00.1: cvl_0_1 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.995 11:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:52.995 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.253 11:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:10:53.253 00:10:53.253 --- 10.0.0.2 ping statistics --- 00:10:53.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.253 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:10:53.253 00:10:53.253 --- 10.0.0.1 ping statistics --- 00:10:53.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.253 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:53.253 11:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2562417 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2562417 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2562417 ']' 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.253 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:53.253 [2024-11-19 11:12:48.595139] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:10:53.253 [2024-11-19 11:12:48.595225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.253 [2024-11-19 11:12:48.677276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.253 [2024-11-19 11:12:48.734036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.253 [2024-11-19 11:12:48.734091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.253 [2024-11-19 11:12:48.734120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.253 [2024-11-19 11:12:48.734137] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.253 [2024-11-19 11:12:48.734147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:53.253 [2024-11-19 11:12:48.735762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.253 [2024-11-19 11:12:48.735821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.253 [2024-11-19 11:12:48.735887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.253 [2024-11-19 11:12:48.735890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.510 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.510 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:10:53.510 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.510 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.510 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:53.510 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.511 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:53.511 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16106 00:10:53.766 [2024-11-19 11:12:49.117160] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:53.766 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:53.766 { 00:10:53.766 "nqn": "nqn.2016-06.io.spdk:cnode16106", 00:10:53.766 "tgt_name": "foobar", 00:10:53.766 "method": "nvmf_create_subsystem", 00:10:53.766 "req_id": 1 00:10:53.766 } 00:10:53.766 Got JSON-RPC error 
response 00:10:53.766 response: 00:10:53.766 { 00:10:53.766 "code": -32603, 00:10:53.766 "message": "Unable to find target foobar" 00:10:53.766 }' 00:10:53.766 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:53.766 { 00:10:53.767 "nqn": "nqn.2016-06.io.spdk:cnode16106", 00:10:53.767 "tgt_name": "foobar", 00:10:53.767 "method": "nvmf_create_subsystem", 00:10:53.767 "req_id": 1 00:10:53.767 } 00:10:53.767 Got JSON-RPC error response 00:10:53.767 response: 00:10:53.767 { 00:10:53.767 "code": -32603, 00:10:53.767 "message": "Unable to find target foobar" 00:10:53.767 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:53.767 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:53.767 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6703 00:10:54.023 [2024-11-19 11:12:49.398121] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6703: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:54.023 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:54.023 { 00:10:54.023 "nqn": "nqn.2016-06.io.spdk:cnode6703", 00:10:54.023 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:54.023 "method": "nvmf_create_subsystem", 00:10:54.023 "req_id": 1 00:10:54.023 } 00:10:54.023 Got JSON-RPC error response 00:10:54.023 response: 00:10:54.023 { 00:10:54.023 "code": -32602, 00:10:54.023 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:54.023 }' 00:10:54.023 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:54.023 { 00:10:54.023 "nqn": "nqn.2016-06.io.spdk:cnode6703", 00:10:54.023 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:54.023 "method": "nvmf_create_subsystem", 00:10:54.023 
"req_id": 1 00:10:54.023 } 00:10:54.023 Got JSON-RPC error response 00:10:54.023 response: 00:10:54.023 { 00:10:54.023 "code": -32602, 00:10:54.023 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:54.023 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:54.023 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:54.023 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27161 00:10:54.339 [2024-11-19 11:12:49.683029] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27161: invalid model number 'SPDK_Controller' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:54.339 { 00:10:54.339 "nqn": "nqn.2016-06.io.spdk:cnode27161", 00:10:54.339 "model_number": "SPDK_Controller\u001f", 00:10:54.339 "method": "nvmf_create_subsystem", 00:10:54.339 "req_id": 1 00:10:54.339 } 00:10:54.339 Got JSON-RPC error response 00:10:54.339 response: 00:10:54.339 { 00:10:54.339 "code": -32602, 00:10:54.339 "message": "Invalid MN SPDK_Controller\u001f" 00:10:54.339 }' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:54.339 { 00:10:54.339 "nqn": "nqn.2016-06.io.spdk:cnode27161", 00:10:54.339 "model_number": "SPDK_Controller\u001f", 00:10:54.339 "method": "nvmf_create_subsystem", 00:10:54.339 "req_id": 1 00:10:54.339 } 00:10:54.339 Got JSON-RPC error response 00:10:54.339 response: 00:10:54.339 { 00:10:54.339 "code": -32602, 00:10:54.339 "message": "Invalid MN SPDK_Controller\u001f" 00:10:54.339 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:54.339 11:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:54.339 11:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:54.339 11:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:54.339 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:54.629 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:54.629 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.629 11:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:10:54.629 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'gD)sD=cCUyt9~K!`hF5X]' 00:10:54.629 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'gD)sD=cCUyt9~K!`hF5X]' nqn.2016-06.io.spdk:cnode15826 00:10:54.629 [2024-11-19 11:12:50.036421] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15826: invalid serial number 'gD)sD=cCUyt9~K!`hF5X]' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:54.629 { 00:10:54.629 "nqn": "nqn.2016-06.io.spdk:cnode15826", 00:10:54.629 "serial_number": "gD)sD=cCUyt9~K!`hF5X]", 00:10:54.629 "method": "nvmf_create_subsystem", 00:10:54.629 "req_id": 1 00:10:54.629 } 00:10:54.629 Got JSON-RPC error response 00:10:54.629 response: 00:10:54.629 { 00:10:54.629 "code": -32602, 00:10:54.629 "message": "Invalid SN gD)sD=cCUyt9~K!`hF5X]" 00:10:54.629 }' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:54.629 { 00:10:54.629 "nqn": "nqn.2016-06.io.spdk:cnode15826", 00:10:54.629 "serial_number": "gD)sD=cCUyt9~K!`hF5X]", 00:10:54.629 "method": "nvmf_create_subsystem", 00:10:54.629 "req_id": 1 00:10:54.629 } 00:10:54.629 Got JSON-RPC error response 00:10:54.629 response: 00:10:54.629 { 00:10:54.629 "code": -32602, 00:10:54.629 "message": "Invalid SN gD)sD=cCUyt9~K!`hF5X]" 00:10:54.629 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:54.629 11:12:50 
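The loop traced above is the harness's `gen_random_s` helper building a random serial/model string one byte at a time (`printf %x` to get the hex code, `echo -e '\xNN'` to render it, `string+=` to append). A minimal reconstruction inferred from this xtrace output; the verbatim upstream `target/invalid.sh` may differ in detail:

```shell
# Inferred sketch of gen_random_s, not the verbatim upstream helper:
# pick `length` random codepoints from the traced chars array ('32'..'127')
# and append each one, rendered via printf %x + echo -e, to the result.
gen_random_s() {
	local length=$1 ll
	local chars=($(seq 32 127))   # matches the chars=('32' ... '127') in the trace
	local string=
	for ((ll = 0; ll < length; ll++)); do
		string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
	done
	printf '%s\n' "$string"
}
```

The 21- and 41-character arguments seen in the trace (`gen_random_s 41`) correspond to the NVMe serial-number and model-number field widths being fuzzed.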
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.629 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:54.629 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:54.629 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.629 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:10:54.886 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.886 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:54.887 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:54.887 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:54.887 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ { == \- ]] 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '{E;u"VP8Dw;}L_Q/s{`2]NV' 00:10:54.887 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '{E;u"VP8Dw;}L_Q/s{`2]NV' nqn.2016-06.io.spdk:cnode25980 00:10:55.144 [2024-11-19 11:12:50.465805] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25980: invalid model number '{E;u"VP8Dw;}L_Q/s{`2]NV' 00:10:55.144 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:55.144 { 00:10:55.144 "nqn": "nqn.2016-06.io.spdk:cnode25980", 00:10:55.144 "model_number": "{E;u\"VP8Dw;}L_Q/s{`2]NV", 00:10:55.144 "method": "nvmf_create_subsystem", 00:10:55.144 "req_id": 1 00:10:55.144 } 00:10:55.144 Got JSON-RPC error response 00:10:55.144 response: 00:10:55.144 { 00:10:55.144 "code": -32602, 00:10:55.144 "message": "Invalid MN {E;u\"VP8Dw;}L_Q/s{`2]NV" 00:10:55.144 }' 00:10:55.144 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:55.144 { 00:10:55.144 "nqn": "nqn.2016-06.io.spdk:cnode25980", 00:10:55.144 "model_number": "{E;u\"VP8Dw;}L_Q/s{`2]NV", 
00:10:55.144 "method": "nvmf_create_subsystem", 00:10:55.144 "req_id": 1 00:10:55.144 } 00:10:55.144 Got JSON-RPC error response 00:10:55.144 response: 00:10:55.144 { 00:10:55.144 "code": -32602, 00:10:55.144 "message": "Invalid MN {E;u\"VP8Dw;}L_Q/s{`2]NV" 00:10:55.144 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:55.144 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:55.401 [2024-11-19 11:12:50.754795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.401 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:55.658 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:55.658 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:55.658 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:55.658 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:55.658 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:55.916 [2024-11-19 11:12:51.292525] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:55.916 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:55.916 { 00:10:55.916 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:55.916 "listen_address": { 00:10:55.916 "trtype": "tcp", 00:10:55.916 "traddr": "", 00:10:55.916 "trsvcid": "4421" 00:10:55.916 }, 00:10:55.916 "method": "nvmf_subsystem_remove_listener", 00:10:55.916 "req_id": 1 00:10:55.916 } 00:10:55.916 
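Each negative test captures the JSON-RPC error text into `out` and glob-matches it; the odd-looking pattern `*\I\n\v\a\l\i\d\ \M\N*` in the trace is simply `*"Invalid MN"*` with every character backslash-escaped by xtrace. A hedged sketch of the same check against a sample payload (the message text is copied from the log above, not re-generated):

```shell
# Sample JSON-RPC error body as printed in the log above.
out='response:
{
  "code": -32602,
  "message": "Invalid MN {E;u\"VP8Dw;}L_Q/s{`2]NV"
}'

# The [[ ... == *pattern* ]] glob match used throughout target/invalid.sh:
# the test passes only when the target rejected the value for the expected reason.
if [[ $out == *"Invalid MN"* ]]; then
	echo "rejected as expected"
fi
```

Matching on the reason string (rather than just the -32602 code) is what distinguishes, e.g., a bad model number from a bad serial number or cntlid range.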
Got JSON-RPC error response 00:10:55.916 response: 00:10:55.916 { 00:10:55.916 "code": -32602, 00:10:55.916 "message": "Invalid parameters" 00:10:55.916 }' 00:10:55.916 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:55.916 { 00:10:55.916 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:55.916 "listen_address": { 00:10:55.916 "trtype": "tcp", 00:10:55.916 "traddr": "", 00:10:55.916 "trsvcid": "4421" 00:10:55.916 }, 00:10:55.916 "method": "nvmf_subsystem_remove_listener", 00:10:55.916 "req_id": 1 00:10:55.916 } 00:10:55.916 Got JSON-RPC error response 00:10:55.916 response: 00:10:55.916 { 00:10:55.916 "code": -32602, 00:10:55.916 "message": "Invalid parameters" 00:10:55.916 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:55.916 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4414 -i 0 00:10:56.174 [2024-11-19 11:12:51.557356] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4414: invalid cntlid range [0-65519] 00:10:56.174 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:56.174 { 00:10:56.174 "nqn": "nqn.2016-06.io.spdk:cnode4414", 00:10:56.174 "min_cntlid": 0, 00:10:56.174 "method": "nvmf_create_subsystem", 00:10:56.174 "req_id": 1 00:10:56.174 } 00:10:56.174 Got JSON-RPC error response 00:10:56.174 response: 00:10:56.174 { 00:10:56.174 "code": -32602, 00:10:56.174 "message": "Invalid cntlid range [0-65519]" 00:10:56.174 }' 00:10:56.174 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:56.174 { 00:10:56.174 "nqn": "nqn.2016-06.io.spdk:cnode4414", 00:10:56.174 "min_cntlid": 0, 00:10:56.174 "method": "nvmf_create_subsystem", 00:10:56.174 "req_id": 1 00:10:56.174 } 00:10:56.174 Got JSON-RPC error response 00:10:56.174 response: 00:10:56.174 
{ 00:10:56.174 "code": -32602, 00:10:56.174 "message": "Invalid cntlid range [0-65519]" 00:10:56.174 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:56.174 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5730 -i 65520 00:10:56.431 [2024-11-19 11:12:51.838276] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5730: invalid cntlid range [65520-65519] 00:10:56.431 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:56.431 { 00:10:56.431 "nqn": "nqn.2016-06.io.spdk:cnode5730", 00:10:56.431 "min_cntlid": 65520, 00:10:56.431 "method": "nvmf_create_subsystem", 00:10:56.431 "req_id": 1 00:10:56.431 } 00:10:56.431 Got JSON-RPC error response 00:10:56.431 response: 00:10:56.431 { 00:10:56.431 "code": -32602, 00:10:56.431 "message": "Invalid cntlid range [65520-65519]" 00:10:56.431 }' 00:10:56.431 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:56.431 { 00:10:56.431 "nqn": "nqn.2016-06.io.spdk:cnode5730", 00:10:56.431 "min_cntlid": 65520, 00:10:56.431 "method": "nvmf_create_subsystem", 00:10:56.431 "req_id": 1 00:10:56.431 } 00:10:56.431 Got JSON-RPC error response 00:10:56.431 response: 00:10:56.431 { 00:10:56.431 "code": -32602, 00:10:56.431 "message": "Invalid cntlid range [65520-65519]" 00:10:56.431 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:56.432 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6743 -I 0 00:10:56.690 [2024-11-19 11:12:52.115170] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6743: invalid cntlid range [1-0] 00:10:56.690 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@77 -- # out='request: 00:10:56.690 { 00:10:56.690 "nqn": "nqn.2016-06.io.spdk:cnode6743", 00:10:56.690 "max_cntlid": 0, 00:10:56.690 "method": "nvmf_create_subsystem", 00:10:56.690 "req_id": 1 00:10:56.690 } 00:10:56.690 Got JSON-RPC error response 00:10:56.690 response: 00:10:56.690 { 00:10:56.690 "code": -32602, 00:10:56.690 "message": "Invalid cntlid range [1-0]" 00:10:56.690 }' 00:10:56.690 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:56.690 { 00:10:56.690 "nqn": "nqn.2016-06.io.spdk:cnode6743", 00:10:56.690 "max_cntlid": 0, 00:10:56.690 "method": "nvmf_create_subsystem", 00:10:56.690 "req_id": 1 00:10:56.690 } 00:10:56.690 Got JSON-RPC error response 00:10:56.690 response: 00:10:56.690 { 00:10:56.690 "code": -32602, 00:10:56.690 "message": "Invalid cntlid range [1-0]" 00:10:56.690 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:56.690 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18541 -I 65520 00:10:56.947 [2024-11-19 11:12:52.380043] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18541: invalid cntlid range [1-65520] 00:10:56.947 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:56.947 { 00:10:56.947 "nqn": "nqn.2016-06.io.spdk:cnode18541", 00:10:56.947 "max_cntlid": 65520, 00:10:56.947 "method": "nvmf_create_subsystem", 00:10:56.947 "req_id": 1 00:10:56.947 } 00:10:56.947 Got JSON-RPC error response 00:10:56.947 response: 00:10:56.947 { 00:10:56.947 "code": -32602, 00:10:56.947 "message": "Invalid cntlid range [1-65520]" 00:10:56.947 }' 00:10:56.947 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:56.947 { 00:10:56.947 "nqn": "nqn.2016-06.io.spdk:cnode18541", 00:10:56.947 "max_cntlid": 65520, 00:10:56.947 
"method": "nvmf_create_subsystem", 00:10:56.947 "req_id": 1 00:10:56.947 } 00:10:56.947 Got JSON-RPC error response 00:10:56.947 response: 00:10:56.947 { 00:10:56.947 "code": -32602, 00:10:56.947 "message": "Invalid cntlid range [1-65520]" 00:10:56.947 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:56.947 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20938 -i 6 -I 5 00:10:57.205 [2024-11-19 11:12:52.640855] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20938: invalid cntlid range [6-5] 00:10:57.205 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:57.205 { 00:10:57.205 "nqn": "nqn.2016-06.io.spdk:cnode20938", 00:10:57.205 "min_cntlid": 6, 00:10:57.205 "max_cntlid": 5, 00:10:57.205 "method": "nvmf_create_subsystem", 00:10:57.205 "req_id": 1 00:10:57.205 } 00:10:57.205 Got JSON-RPC error response 00:10:57.205 response: 00:10:57.205 { 00:10:57.205 "code": -32602, 00:10:57.205 "message": "Invalid cntlid range [6-5]" 00:10:57.205 }' 00:10:57.205 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:57.205 { 00:10:57.205 "nqn": "nqn.2016-06.io.spdk:cnode20938", 00:10:57.205 "min_cntlid": 6, 00:10:57.205 "max_cntlid": 5, 00:10:57.205 "method": "nvmf_create_subsystem", 00:10:57.205 "req_id": 1 00:10:57.205 } 00:10:57.205 Got JSON-RPC error response 00:10:57.205 response: 00:10:57.205 { 00:10:57.205 "code": -32602, 00:10:57.205 "message": "Invalid cntlid range [6-5]" 00:10:57.205 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:57.205 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:57.462 11:12:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:57.462 { 00:10:57.462 "name": "foobar", 00:10:57.462 "method": "nvmf_delete_target", 00:10:57.462 "req_id": 1 00:10:57.462 } 00:10:57.462 Got JSON-RPC error response 00:10:57.462 response: 00:10:57.462 { 00:10:57.462 "code": -32602, 00:10:57.462 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:57.462 }' 00:10:57.462 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:57.462 { 00:10:57.462 "name": "foobar", 00:10:57.462 "method": "nvmf_delete_target", 00:10:57.462 "req_id": 1 00:10:57.462 } 00:10:57.462 Got JSON-RPC error response 00:10:57.462 response: 00:10:57.462 { 00:10:57.462 "code": -32602, 00:10:57.462 "message": "The specified target doesn't exist, cannot delete it." 00:10:57.462 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:57.462 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:57.462 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:57.462 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.462 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:10:57.462 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.462 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:10:57.462 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.462 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.462 rmmod nvme_tcp 00:10:57.462 rmmod nvme_fabrics 00:10:57.462 rmmod nvme_keyring 00:10:57.462 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:10:57.462 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2562417 ']' 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2562417 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2562417 ']' 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2562417 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2562417 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2562417' 00:10:57.463 killing process with pid 2562417 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2562417 00:10:57.463 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2562417 00:10:57.722 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.722 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.722 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:10:57.722 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:10:57.722 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:10:57.722 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.722 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.722 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.722 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.722 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.722 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.722 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.261 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:00.261 00:11:00.261 real 0m9.618s 00:11:00.261 user 0m21.614s 00:11:00.261 sys 0m2.942s 00:11:00.261 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.261 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:00.261 ************************************ 00:11:00.261 END TEST nvmf_invalid 00:11:00.261 ************************************ 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.262 ************************************ 00:11:00.262 START TEST nvmf_connect_stress 00:11:00.262 ************************************ 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:00.262 * Looking for test storage... 00:11:00.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- scripts/common.sh@338 -- # local 'op=<' 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.262 11:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.262 --rc genhtml_branch_coverage=1 00:11:00.262 --rc genhtml_function_coverage=1 00:11:00.262 --rc genhtml_legend=1 00:11:00.262 --rc geninfo_all_blocks=1 00:11:00.262 --rc geninfo_unexecuted_blocks=1 00:11:00.262 00:11:00.262 ' 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.262 --rc genhtml_branch_coverage=1 00:11:00.262 --rc genhtml_function_coverage=1 00:11:00.262 --rc genhtml_legend=1 00:11:00.262 --rc geninfo_all_blocks=1 00:11:00.262 --rc geninfo_unexecuted_blocks=1 00:11:00.262 00:11:00.262 ' 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.262 --rc genhtml_branch_coverage=1 00:11:00.262 --rc genhtml_function_coverage=1 00:11:00.262 --rc genhtml_legend=1 00:11:00.262 --rc geninfo_all_blocks=1 00:11:00.262 --rc geninfo_unexecuted_blocks=1 00:11:00.262 00:11:00.262 ' 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.262 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:00.262 --rc genhtml_branch_coverage=1 00:11:00.262 --rc genhtml_function_coverage=1 00:11:00.262 --rc genhtml_legend=1 00:11:00.262 --rc geninfo_all_blocks=1 00:11:00.262 --rc geninfo_unexecuted_blocks=1 00:11:00.262 00:11:00.262 ' 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.262 11:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.262 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.263 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.796 11:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:02.796 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.796 11:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:02.796 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.796 11:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:11:02.796 Found net devices under 0000:82:00.0: cvl_0_0 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.796 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:02.796 Found net devices under 0000:82:00.1: cvl_0_1 
00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:02.796 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:02.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:02.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:11:02.797 00:11:02.797 --- 10.0.0.2 ping statistics --- 00:11:02.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.797 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:11:02.797 00:11:02.797 --- 10.0.0.1 ping statistics --- 00:11:02.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.797 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:02.797 11:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2565393 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2565393 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2565393 ']' 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.797 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.797 [2024-11-19 11:12:58.225429] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:11:02.797 [2024-11-19 11:12:58.225528] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.056 [2024-11-19 11:12:58.312862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.056 [2024-11-19 11:12:58.372306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.056 [2024-11-19 11:12:58.372359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.056 [2024-11-19 11:12:58.372398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.056 [2024-11-19 11:12:58.372410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.056 [2024-11-19 11:12:58.372420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:03.056 [2024-11-19 11:12:58.374010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.056 [2024-11-19 11:12:58.374075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.056 [2024-11-19 11:12:58.374079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.056 [2024-11-19 11:12:58.529848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.056 [2024-11-19 11:12:58.547080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.056 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.315 NULL1 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2565414 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:03.315 11:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.315 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.573 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.573 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:03.573 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.573 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.573 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.831 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.831 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:03.831 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.831 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.831 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.089 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.089 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:04.089 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.089 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.089 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.655 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.655 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:04.655 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.656 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.656 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.913 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.913 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:04.913 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.913 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.913 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.171 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.171 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:05.171 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.171 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.171 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.429 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.429 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:05.429 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.429 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.429 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.686 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.686 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:05.686 11:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.686 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.686 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.251 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.251 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:06.251 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.251 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.251 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.509 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.509 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:06.509 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.509 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.509 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.767 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.767 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:06.767 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.767 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.767 
11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.026 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.026 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:07.026 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.026 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.026 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.591 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.591 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:07.591 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.591 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.591 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.848 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.848 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:07.848 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.848 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.848 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.106 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.106 
11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:08.106 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.106 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.106 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.363 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.363 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:08.363 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.363 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.363 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.620 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.620 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:08.620 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.620 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.620 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.185 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.185 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:09.185 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:11:09.185 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.185 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.442 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.442 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:09.442 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.443 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.443 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.700 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.700 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:09.700 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.700 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.700 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.957 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.957 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414 00:11:09.957 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.957 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.957 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x
00:11:10.214 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:10.214 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414
00:11:10.214 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:10.214 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:10.214 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:10.779 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:10.779 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414
00:11:10.779 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:10.779 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:10.779 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:11.036 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.036 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414
00:11:11.036 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:11.036 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.036 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:11.293 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.293 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414
00:11:11.293 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:11.293 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.293 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:11.551 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.551 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414
00:11:11.551 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:11.551 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.551 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:11.809 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.809 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414
00:11:11.809 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:11.809 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.809 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:12.375 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.375 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414
00:11:12.375 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:12.375 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.375 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:12.633 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.633 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414
00:11:12.633 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:12.633 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.633 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:12.891 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.891 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414
00:11:12.891 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:12.891 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.891 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:13.149 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.149 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414
00:11:13.149 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:13.149 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.149 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:13.406 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:13.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2565414
00:11:13.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2565414) - No such process
00:11:13.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2565414
00:11:13.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:11:13.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:11:13.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:11:13.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:13.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:11:13.406 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:13.407 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:11:13.407 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:13.407 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:13.407 rmmod nvme_tcp
00:11:13.665 rmmod nvme_fabrics
00:11:13.665 rmmod nvme_keyring
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2565393 ']'
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2565393
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2565393 ']'
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2565393
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2565393
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2565393'
00:11:13.665 killing process with pid 2565393
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2565393
00:11:13.665 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2565393
00:11:13.925 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:13.925 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:13.925 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:13.925 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:11:13.925 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:11:13.925 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:13.925 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:11:13.925 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:13.925 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:13.925 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:13.925 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:13.925 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:15.831 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:15.831
00:11:15.831 real 0m16.088s
00:11:15.831 user 0m38.312s
00:11:15.831 sys 0m6.730s
00:11:15.831 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:15.831 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:15.831 ************************************
00:11:15.831 END TEST nvmf_connect_stress
00:11:15.831 ************************************
00:11:15.831 11:13:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:11:15.831 11:13:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:15.831 11:13:11 nvmf_tcp.nvmf_target_extra --
common/autotest_common.sh@1111 -- # xtrace_disable
00:11:15.831 11:13:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:15.831 ************************************
00:11:15.831 START TEST nvmf_fused_ordering
00:11:15.831 ************************************
00:11:15.831 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:11:16.090 * Looking for test storage...
00:11:16.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:16.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:16.090 --rc genhtml_branch_coverage=1
00:11:16.090 --rc genhtml_function_coverage=1
00:11:16.090 --rc genhtml_legend=1
00:11:16.090 --rc geninfo_all_blocks=1
00:11:16.090 --rc geninfo_unexecuted_blocks=1
00:11:16.090
00:11:16.090 '
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:16.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:16.090 --rc genhtml_branch_coverage=1
00:11:16.090 --rc genhtml_function_coverage=1
00:11:16.090 --rc genhtml_legend=1
00:11:16.090 --rc geninfo_all_blocks=1
00:11:16.090 --rc geninfo_unexecuted_blocks=1
00:11:16.090
00:11:16.090 '
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:11:16.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:16.090 --rc genhtml_branch_coverage=1
00:11:16.090 --rc genhtml_function_coverage=1
00:11:16.090 --rc genhtml_legend=1
00:11:16.090 --rc geninfo_all_blocks=1
00:11:16.090 --rc geninfo_unexecuted_blocks=1
00:11:16.090
00:11:16.090 '
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:11:16.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:16.090 --rc genhtml_branch_coverage=1
00:11:16.090 --rc genhtml_function_coverage=1
00:11:16.090 --rc genhtml_legend=1
00:11:16.090 --rc geninfo_all_blocks=1
00:11:16.090 --rc geninfo_unexecuted_blocks=1
00:11:16.090
00:11:16.090 '
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- #
NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.090 11:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.090 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable
00:11:16.091 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:18.624 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=()
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=()
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=()
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=()
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=()
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)'
00:11:18.625 Found 0000:82:00.0 (0x8086 - 0x159b)
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:18.625 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)'
00:11:18.884 Found 0000:82:00.1 (0x8086 - 0x159b)
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0'
00:11:18.884 Found net devices under 0000:82:00.0: cvl_0_0
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1'
00:11:18.884 Found net devices under 0000:82:00.1: cvl_0_1
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:18.884 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:18.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:18.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:11:18.884 00:11:18.885 --- 10.0.0.2 ping statistics --- 00:11:18.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.885 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:11:18.885 00:11:18.885 --- 10.0.0.1 ping statistics --- 00:11:18.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.885 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:18.885 11:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2568983 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2568983 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2568983 ']' 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.885 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:18.885 [2024-11-19 11:13:14.323048] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:11:18.885 [2024-11-19 11:13:14.323122] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.143 [2024-11-19 11:13:14.406499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.143 [2024-11-19 11:13:14.464234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.143 [2024-11-19 11:13:14.464303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.143 [2024-11-19 11:13:14.464331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.143 [2024-11-19 11:13:14.464342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.143 [2024-11-19 11:13:14.464352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:19.143 [2024-11-19 11:13:14.465091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.143 [2024-11-19 11:13:14.611863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.143 [2024-11-19 11:13:14.628059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.143 NULL1 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.143 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.401 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.401 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:19.401 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.401 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.401 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.401 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:19.401 [2024-11-19 11:13:14.672393] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:11:19.401 [2024-11-19 11:13:14.672455] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569104 ] 00:11:19.660 Attached to nqn.2016-06.io.spdk:cnode1 00:11:19.660 Namespace ID: 1 size: 1GB 00:11:19.660 fused_ordering(0) 00:11:19.660 fused_ordering(1) 00:11:19.660 fused_ordering(2) 00:11:19.660 fused_ordering(3) 00:11:19.660 fused_ordering(4) 00:11:19.660 fused_ordering(5) 00:11:19.660 fused_ordering(6) 00:11:19.660 fused_ordering(7) 00:11:19.660 fused_ordering(8) 00:11:19.660 fused_ordering(9) 00:11:19.660 fused_ordering(10) 00:11:19.660 fused_ordering(11) 00:11:19.660 fused_ordering(12) 00:11:19.660 fused_ordering(13) 00:11:19.660 fused_ordering(14) 00:11:19.660 fused_ordering(15) 00:11:19.660 fused_ordering(16) 00:11:19.660 fused_ordering(17) 00:11:19.660 fused_ordering(18) 00:11:19.660 fused_ordering(19) 00:11:19.660 fused_ordering(20) 00:11:19.660 fused_ordering(21) 00:11:19.660 fused_ordering(22) 00:11:19.660 fused_ordering(23) 00:11:19.660 fused_ordering(24) 00:11:19.660 fused_ordering(25) 00:11:19.660 fused_ordering(26) 00:11:19.660 fused_ordering(27) 00:11:19.660 
fused_ordering(28) 00:11:19.660 fused_ordering(29) 00:11:19.660 fused_ordering(30) 00:11:19.660 fused_ordering(31) 00:11:19.660 fused_ordering(32) 00:11:19.660 fused_ordering(33) 00:11:19.660 fused_ordering(34) 00:11:19.660 fused_ordering(35) 00:11:19.660 fused_ordering(36) 00:11:19.660 fused_ordering(37) 00:11:19.660 fused_ordering(38) 00:11:19.660 fused_ordering(39) 00:11:19.660 fused_ordering(40) 00:11:19.660 fused_ordering(41) 00:11:19.660 fused_ordering(42) 00:11:19.660 fused_ordering(43) 00:11:19.660 fused_ordering(44) 00:11:19.660 fused_ordering(45) 00:11:19.660 fused_ordering(46) 00:11:19.660 fused_ordering(47) 00:11:19.660 fused_ordering(48) 00:11:19.660 fused_ordering(49) 00:11:19.660 fused_ordering(50) 00:11:19.660 fused_ordering(51) 00:11:19.660 fused_ordering(52) 00:11:19.660 fused_ordering(53) 00:11:19.660 fused_ordering(54) 00:11:19.660 fused_ordering(55) 00:11:19.660 fused_ordering(56) 00:11:19.660 fused_ordering(57) 00:11:19.660 fused_ordering(58) 00:11:19.660 fused_ordering(59) 00:11:19.660 fused_ordering(60) 00:11:19.660 fused_ordering(61) 00:11:19.660 fused_ordering(62) 00:11:19.660 fused_ordering(63) 00:11:19.660 fused_ordering(64) 00:11:19.660 fused_ordering(65) 00:11:19.660 fused_ordering(66) 00:11:19.660 fused_ordering(67) 00:11:19.660 fused_ordering(68) 00:11:19.660 fused_ordering(69) 00:11:19.660 fused_ordering(70) 00:11:19.660 fused_ordering(71) 00:11:19.660 fused_ordering(72) 00:11:19.660 fused_ordering(73) 00:11:19.660 fused_ordering(74) 00:11:19.660 fused_ordering(75) 00:11:19.660 fused_ordering(76) 00:11:19.660 fused_ordering(77) 00:11:19.660 fused_ordering(78) 00:11:19.660 fused_ordering(79) 00:11:19.660 fused_ordering(80) 00:11:19.660 fused_ordering(81) 00:11:19.660 fused_ordering(82) 00:11:19.660 fused_ordering(83) 00:11:19.660 fused_ordering(84) 00:11:19.660 fused_ordering(85) 00:11:19.660 fused_ordering(86) 00:11:19.660 fused_ordering(87) 00:11:19.660 fused_ordering(88) 00:11:19.660 fused_ordering(89) 00:11:19.660 
fused_ordering(90) 00:11:19.660 fused_ordering(91) 00:11:19.660 fused_ordering(92) 00:11:19.660 fused_ordering(93) 00:11:19.661 fused_ordering(94) 00:11:19.661 fused_ordering(95) 00:11:19.661 fused_ordering(96) 00:11:19.661 fused_ordering(97) 00:11:19.661 fused_ordering(98) 00:11:19.661 fused_ordering(99) 00:11:19.661 fused_ordering(100) 00:11:19.661 fused_ordering(101) 00:11:19.661 fused_ordering(102) 00:11:19.661 fused_ordering(103) 00:11:19.661 fused_ordering(104) 00:11:19.661 fused_ordering(105) 00:11:19.661 fused_ordering(106) 00:11:19.661 fused_ordering(107) 00:11:19.661 fused_ordering(108) 00:11:19.661 fused_ordering(109) 00:11:19.661 fused_ordering(110) 00:11:19.661 fused_ordering(111) 00:11:19.661 fused_ordering(112) 00:11:19.661 fused_ordering(113) 00:11:19.661 fused_ordering(114) 00:11:19.661 fused_ordering(115) 00:11:19.661 fused_ordering(116) 00:11:19.661 fused_ordering(117) 00:11:19.661 fused_ordering(118) 00:11:19.661 fused_ordering(119) 00:11:19.661 fused_ordering(120) 00:11:19.661 fused_ordering(121) 00:11:19.661 fused_ordering(122) 00:11:19.661 fused_ordering(123) 00:11:19.661 fused_ordering(124) 00:11:19.661 fused_ordering(125) 00:11:19.661 fused_ordering(126) 00:11:19.661 fused_ordering(127) 00:11:19.661 fused_ordering(128) 00:11:19.661 fused_ordering(129) 00:11:19.661 fused_ordering(130) 00:11:19.661 fused_ordering(131) 00:11:19.661 fused_ordering(132) 00:11:19.661 fused_ordering(133) 00:11:19.661 fused_ordering(134) 00:11:19.661 fused_ordering(135) 00:11:19.661 fused_ordering(136) 00:11:19.661 fused_ordering(137) 00:11:19.661 fused_ordering(138) 00:11:19.661 fused_ordering(139) 00:11:19.661 fused_ordering(140) 00:11:19.661 fused_ordering(141) 00:11:19.661 fused_ordering(142) 00:11:19.661 fused_ordering(143) 00:11:19.661 fused_ordering(144) 00:11:19.661 fused_ordering(145) 00:11:19.661 fused_ordering(146) 00:11:19.661 fused_ordering(147) 00:11:19.661 fused_ordering(148) 00:11:19.661 fused_ordering(149) 00:11:19.661 fused_ordering(150) 
00:11:19.661 fused_ordering(151) 00:11:19.661 fused_ordering(152) 00:11:19.661 fused_ordering(153) 00:11:19.661 fused_ordering(154) 00:11:19.661 fused_ordering(155) 00:11:19.661 fused_ordering(156) 00:11:19.661 fused_ordering(157) 00:11:19.661 fused_ordering(158) 00:11:19.661 fused_ordering(159) 00:11:19.661 fused_ordering(160) 00:11:19.661 fused_ordering(161) 00:11:19.661 fused_ordering(162) 00:11:19.661 fused_ordering(163) 00:11:19.661 fused_ordering(164) 00:11:19.661 fused_ordering(165) 00:11:19.661 fused_ordering(166) 00:11:19.661 fused_ordering(167) 00:11:19.661 fused_ordering(168) 00:11:19.661 fused_ordering(169) 00:11:19.661 fused_ordering(170) 00:11:19.661 fused_ordering(171) 00:11:19.661 fused_ordering(172) 00:11:19.661 fused_ordering(173) 00:11:19.661 fused_ordering(174) 00:11:19.661 fused_ordering(175) 00:11:19.661 fused_ordering(176) 00:11:19.661 fused_ordering(177) 00:11:19.661 fused_ordering(178) 00:11:19.661 fused_ordering(179) 00:11:19.661 fused_ordering(180) 00:11:19.661 fused_ordering(181) 00:11:19.661 fused_ordering(182) 00:11:19.661 fused_ordering(183) 00:11:19.661 fused_ordering(184) 00:11:19.661 fused_ordering(185) 00:11:19.661 fused_ordering(186) 00:11:19.661 fused_ordering(187) 00:11:19.661 fused_ordering(188) 00:11:19.661 fused_ordering(189) 00:11:19.661 fused_ordering(190) 00:11:19.661 fused_ordering(191) 00:11:19.661 fused_ordering(192) 00:11:19.661 fused_ordering(193) 00:11:19.661 fused_ordering(194) 00:11:19.661 fused_ordering(195) 00:11:19.661 fused_ordering(196) 00:11:19.661 fused_ordering(197) 00:11:19.661 fused_ordering(198) 00:11:19.661 fused_ordering(199) 00:11:19.661 fused_ordering(200) 00:11:19.661 fused_ordering(201) 00:11:19.661 fused_ordering(202) 00:11:19.661 fused_ordering(203) 00:11:19.661 fused_ordering(204) 00:11:19.661 fused_ordering(205) 00:11:20.227 fused_ordering(206) 00:11:20.227 fused_ordering(207) 00:11:20.227 fused_ordering(208) 00:11:20.227 fused_ordering(209) 00:11:20.227 fused_ordering(210) 00:11:20.227 
fused_ordering(211) 00:11:20.227 fused_ordering(212) 00:11:20.227 fused_ordering(213) 00:11:20.227 fused_ordering(214) 00:11:20.227 fused_ordering(215) 00:11:20.227 fused_ordering(216) 00:11:20.227 fused_ordering(217) 00:11:20.227 fused_ordering(218) 00:11:20.227 fused_ordering(219) 00:11:20.227 fused_ordering(220) 00:11:20.227 fused_ordering(221) 00:11:20.227 fused_ordering(222) 00:11:20.227 fused_ordering(223) 00:11:20.227 fused_ordering(224) 00:11:20.227 fused_ordering(225) 00:11:20.227 fused_ordering(226) 00:11:20.227 fused_ordering(227) 00:11:20.227 fused_ordering(228) 00:11:20.227 fused_ordering(229) 00:11:20.227 fused_ordering(230) 00:11:20.227 fused_ordering(231) 00:11:20.227 fused_ordering(232) 00:11:20.227 fused_ordering(233) 00:11:20.227 fused_ordering(234) 00:11:20.227 fused_ordering(235) 00:11:20.227 fused_ordering(236) 00:11:20.227 fused_ordering(237) 00:11:20.227 fused_ordering(238) 00:11:20.227 fused_ordering(239) 00:11:20.227 fused_ordering(240) 00:11:20.227 fused_ordering(241) 00:11:20.227 fused_ordering(242) 00:11:20.227 fused_ordering(243) 00:11:20.227 fused_ordering(244) 00:11:20.227 fused_ordering(245) 00:11:20.227 fused_ordering(246) 00:11:20.227 fused_ordering(247) 00:11:20.227 fused_ordering(248) 00:11:20.227 fused_ordering(249) 00:11:20.227 fused_ordering(250) 00:11:20.227 fused_ordering(251) 00:11:20.227 fused_ordering(252) 00:11:20.227 fused_ordering(253) 00:11:20.227 fused_ordering(254) 00:11:20.227 fused_ordering(255) 00:11:20.227 fused_ordering(256) 00:11:20.227 fused_ordering(257) 00:11:20.227 fused_ordering(258) 00:11:20.227 fused_ordering(259) 00:11:20.227 fused_ordering(260) 00:11:20.227 fused_ordering(261) 00:11:20.227 fused_ordering(262) 00:11:20.227 fused_ordering(263) 00:11:20.227 fused_ordering(264) 00:11:20.227 fused_ordering(265) 00:11:20.227 fused_ordering(266) 00:11:20.227 fused_ordering(267) 00:11:20.227 fused_ordering(268) 00:11:20.227 fused_ordering(269) 00:11:20.227 fused_ordering(270) 00:11:20.227 fused_ordering(271) 
00:11:20.227 fused_ordering(272) 00:11:20.227 fused_ordering(273) 00:11:20.227 fused_ordering(274) 00:11:20.227 fused_ordering(275) 00:11:20.227 fused_ordering(276) 00:11:20.227 fused_ordering(277) 00:11:20.227 fused_ordering(278) 00:11:20.227 fused_ordering(279) 00:11:20.227 fused_ordering(280) 00:11:20.227 fused_ordering(281) 00:11:20.227 fused_ordering(282) 00:11:20.227 fused_ordering(283) 00:11:20.227 fused_ordering(284) 00:11:20.228 fused_ordering(285) 00:11:20.228 fused_ordering(286) 00:11:20.228 fused_ordering(287) 00:11:20.228 fused_ordering(288) 00:11:20.228 fused_ordering(289) 00:11:20.228 fused_ordering(290) 00:11:20.228 fused_ordering(291) 00:11:20.228 fused_ordering(292) 00:11:20.228 fused_ordering(293) 00:11:20.228 fused_ordering(294) 00:11:20.228 fused_ordering(295) 00:11:20.228 fused_ordering(296) 00:11:20.228 fused_ordering(297) 00:11:20.228 fused_ordering(298) 00:11:20.228 fused_ordering(299) 00:11:20.228 fused_ordering(300) 00:11:20.228 fused_ordering(301) 00:11:20.228 fused_ordering(302) 00:11:20.228 fused_ordering(303) 00:11:20.228 fused_ordering(304) 00:11:20.228 fused_ordering(305) 00:11:20.228 fused_ordering(306) 00:11:20.228 fused_ordering(307) 00:11:20.228 fused_ordering(308) 00:11:20.228 fused_ordering(309) 00:11:20.228 fused_ordering(310) 00:11:20.228 fused_ordering(311) 00:11:20.228 fused_ordering(312) 00:11:20.228 fused_ordering(313) 00:11:20.228 fused_ordering(314) 00:11:20.228 fused_ordering(315) 00:11:20.228 fused_ordering(316) 00:11:20.228 fused_ordering(317) 00:11:20.228 fused_ordering(318) 00:11:20.228 fused_ordering(319) 00:11:20.228 fused_ordering(320) 00:11:20.228 fused_ordering(321) 00:11:20.228 fused_ordering(322) 00:11:20.228 fused_ordering(323) 00:11:20.228 fused_ordering(324) 00:11:20.228 fused_ordering(325) 00:11:20.228 fused_ordering(326) 00:11:20.228 fused_ordering(327) 00:11:20.228 fused_ordering(328) 00:11:20.228 fused_ordering(329) 00:11:20.228 fused_ordering(330) 00:11:20.228 fused_ordering(331) 00:11:20.228 
fused_ordering(332) 00:11:20.228 fused_ordering(333) 00:11:20.228 fused_ordering(334) 00:11:20.228 fused_ordering(335) 00:11:20.228 fused_ordering(336) 00:11:20.228 fused_ordering(337) 00:11:20.228 fused_ordering(338) 00:11:20.228 fused_ordering(339) 00:11:20.228 fused_ordering(340) 00:11:20.228 fused_ordering(341) 00:11:20.228 fused_ordering(342) 00:11:20.228 fused_ordering(343) 00:11:20.228 fused_ordering(344) 00:11:20.228 fused_ordering(345) 00:11:20.228 fused_ordering(346) 00:11:20.228 fused_ordering(347) 00:11:20.228 fused_ordering(348) 00:11:20.228 fused_ordering(349) 00:11:20.228 fused_ordering(350) 00:11:20.228 fused_ordering(351) 00:11:20.228 fused_ordering(352) 00:11:20.228 fused_ordering(353) 00:11:20.228 fused_ordering(354) 00:11:20.228 fused_ordering(355) 00:11:20.228 fused_ordering(356) 00:11:20.228 fused_ordering(357) 00:11:20.228 fused_ordering(358) 00:11:20.228 fused_ordering(359) 00:11:20.228 fused_ordering(360) 00:11:20.228 fused_ordering(361) 00:11:20.228 fused_ordering(362) 00:11:20.228 fused_ordering(363) 00:11:20.228 fused_ordering(364) 00:11:20.228 fused_ordering(365) 00:11:20.228 fused_ordering(366) 00:11:20.228 fused_ordering(367) 00:11:20.228 fused_ordering(368) 00:11:20.228 fused_ordering(369) 00:11:20.228 fused_ordering(370) 00:11:20.228 fused_ordering(371) 00:11:20.228 fused_ordering(372) 00:11:20.228 fused_ordering(373) 00:11:20.228 fused_ordering(374) 00:11:20.228 fused_ordering(375) 00:11:20.228 fused_ordering(376) 00:11:20.228 fused_ordering(377) 00:11:20.228 fused_ordering(378) 00:11:20.228 fused_ordering(379) 00:11:20.228 fused_ordering(380) 00:11:20.228 fused_ordering(381) 00:11:20.228 fused_ordering(382) 00:11:20.228 fused_ordering(383) 00:11:20.228 fused_ordering(384) 00:11:20.228 fused_ordering(385) 00:11:20.228 fused_ordering(386) 00:11:20.228 fused_ordering(387) 00:11:20.228 fused_ordering(388) 00:11:20.228 fused_ordering(389) 00:11:20.228 fused_ordering(390) 00:11:20.228 fused_ordering(391) 00:11:20.228 fused_ordering(392) 
00:11:20.228 fused_ordering(393) 00:11:20.228 fused_ordering(394) 00:11:20.228 fused_ordering(395) 00:11:20.228 fused_ordering(396) 00:11:20.228 fused_ordering(397) 00:11:20.228 fused_ordering(398) 00:11:20.228 fused_ordering(399) 00:11:20.228 fused_ordering(400) 00:11:20.228 fused_ordering(401) 00:11:20.228 fused_ordering(402) 00:11:20.228 fused_ordering(403) 00:11:20.228 fused_ordering(404) 00:11:20.228 fused_ordering(405) 00:11:20.228 fused_ordering(406) 00:11:20.228 fused_ordering(407) 00:11:20.228 fused_ordering(408) 00:11:20.228 fused_ordering(409) 00:11:20.228 fused_ordering(410) 00:11:20.487 fused_ordering(411) 00:11:20.488 fused_ordering(412) 00:11:20.488 fused_ordering(413) 00:11:20.488 fused_ordering(414) 00:11:20.488 fused_ordering(415) 00:11:20.488 fused_ordering(416) 00:11:20.488 fused_ordering(417) 00:11:20.488 fused_ordering(418) 00:11:20.488 fused_ordering(419) 00:11:20.488 fused_ordering(420) 00:11:20.488 fused_ordering(421) 00:11:20.488 fused_ordering(422) 00:11:20.488 fused_ordering(423) 00:11:20.488 fused_ordering(424) 00:11:20.488 fused_ordering(425) 00:11:20.488 fused_ordering(426) 00:11:20.488 fused_ordering(427) 00:11:20.488 fused_ordering(428) 00:11:20.488 fused_ordering(429) 00:11:20.488 fused_ordering(430) 00:11:20.488 fused_ordering(431) 00:11:20.488 fused_ordering(432) 00:11:20.488 fused_ordering(433) 00:11:20.488 fused_ordering(434) 00:11:20.488 fused_ordering(435) 00:11:20.488 fused_ordering(436) 00:11:20.488 fused_ordering(437) 00:11:20.488 fused_ordering(438) 00:11:20.488 fused_ordering(439) 00:11:20.488 fused_ordering(440) 00:11:20.488 fused_ordering(441) 00:11:20.488 fused_ordering(442) 00:11:20.488 fused_ordering(443) 00:11:20.488 fused_ordering(444) 00:11:20.488 fused_ordering(445) 00:11:20.488 fused_ordering(446) 00:11:20.488 fused_ordering(447) 00:11:20.488 fused_ordering(448) 00:11:20.488 fused_ordering(449) 00:11:20.488 fused_ordering(450) 00:11:20.488 fused_ordering(451) 00:11:20.488 fused_ordering(452) 00:11:20.488 
fused_ordering(453) 00:11:20.488 [fused_ordering(454) through fused_ordering(997) omitted: 544 consecutive per-request trace entries of the same form, timestamps 00:11:20.488 to 00:11:21.687]
00:11:21.687 fused_ordering(998) 00:11:21.687 fused_ordering(999) 00:11:21.687 fused_ordering(1000) 00:11:21.687 fused_ordering(1001) 00:11:21.687 fused_ordering(1002) 00:11:21.687 fused_ordering(1003) 00:11:21.687 fused_ordering(1004) 00:11:21.687 fused_ordering(1005) 00:11:21.687 fused_ordering(1006) 00:11:21.687 fused_ordering(1007) 00:11:21.687 fused_ordering(1008) 00:11:21.687 fused_ordering(1009) 00:11:21.687 fused_ordering(1010) 00:11:21.687 fused_ordering(1011) 00:11:21.687 fused_ordering(1012) 00:11:21.687 fused_ordering(1013) 00:11:21.687 fused_ordering(1014) 00:11:21.687 fused_ordering(1015) 00:11:21.687 fused_ordering(1016) 00:11:21.687 fused_ordering(1017) 00:11:21.688 fused_ordering(1018) 00:11:21.688 fused_ordering(1019) 00:11:21.688 fused_ordering(1020) 00:11:21.688 fused_ordering(1021) 00:11:21.688 fused_ordering(1022) 00:11:21.688 fused_ordering(1023) 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.688 rmmod nvme_tcp 00:11:21.688 rmmod nvme_fabrics 00:11:21.688 rmmod nvme_keyring 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2568983 ']' 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2568983 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2568983 ']' 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2568983 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2568983 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2568983' 00:11:21.688 killing process with pid 2568983 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2568983 00:11:21.688 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2568983 00:11:21.945 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.945 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:11:21.945 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.945 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:21.945 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:21.945 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.945 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.945 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.945 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.946 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.946 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.946 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:24.497 00:11:24.497 real 0m8.053s 00:11:24.497 user 0m5.006s 00:11:24.497 sys 0m3.775s 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:24.497 ************************************ 00:11:24.497 END TEST nvmf_fused_ordering 00:11:24.497 ************************************ 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:24.497 11:13:19 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.497 ************************************ 00:11:24.497 START TEST nvmf_ns_masking 00:11:24.497 ************************************ 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:24.497 * Looking for test storage... 00:11:24.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.497 11:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.497 --rc genhtml_branch_coverage=1 00:11:24.497 --rc genhtml_function_coverage=1 00:11:24.497 --rc genhtml_legend=1 00:11:24.497 --rc geninfo_all_blocks=1 00:11:24.497 --rc geninfo_unexecuted_blocks=1 00:11:24.497 00:11:24.497 ' 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.497 --rc genhtml_branch_coverage=1 00:11:24.497 --rc genhtml_function_coverage=1 00:11:24.497 --rc genhtml_legend=1 00:11:24.497 --rc geninfo_all_blocks=1 00:11:24.497 --rc geninfo_unexecuted_blocks=1 00:11:24.497 00:11:24.497 ' 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.497 --rc genhtml_branch_coverage=1 00:11:24.497 --rc genhtml_function_coverage=1 00:11:24.497 --rc genhtml_legend=1 00:11:24.497 --rc geninfo_all_blocks=1 00:11:24.497 --rc geninfo_unexecuted_blocks=1 00:11:24.497 00:11:24.497 ' 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.497 --rc genhtml_branch_coverage=1 00:11:24.497 --rc 
genhtml_function_coverage=1 00:11:24.497 --rc genhtml_legend=1 00:11:24.497 --rc geninfo_all_blocks=1 00:11:24.497 --rc geninfo_unexecuted_blocks=1 00:11:24.497 00:11:24.497 ' 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.497 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=87edc8ce-bde1-45e3-a16c-85a418d6b8ff 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=e05c455b-70b3-4b34-8828-6a4a07da6a0e 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a72a9dd5-645b-431d-8cef-b7653ff22fb8 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.498 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:27.025 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.025 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:11:27.025 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:27.025 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:27.025 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:27.025 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:27.025 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:27.025 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:27.026 11:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.026 11:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:27.026 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:27.026 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: 
cvl_0_0' 00:11:27.026 Found net devices under 0000:82:00.0: cvl_0_0 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:27.026 Found net devices under 0000:82:00.1: cvl_0_1 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:27.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:11:27.026 00:11:27.026 --- 10.0.0.2 ping statistics --- 00:11:27.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.026 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:11:27.026 00:11:27.026 --- 10.0.0.1 ping statistics --- 00:11:27.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.026 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.026 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2571634 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2571634 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2571634 ']' 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.027 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:27.027 [2024-11-19 11:13:22.504894] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:11:27.027 [2024-11-19 11:13:22.504971] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.284 [2024-11-19 11:13:22.589110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.284 [2024-11-19 11:13:22.647546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.284 [2024-11-19 11:13:22.647600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:27.284 [2024-11-19 11:13:22.647630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.284 [2024-11-19 11:13:22.647652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.284 [2024-11-19 11:13:22.647662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.285 [2024-11-19 11:13:22.648263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.285 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.285 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:27.285 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:27.285 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.285 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:27.542 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.542 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:27.800 [2024-11-19 11:13:23.092080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.800 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:27.800 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:27.800 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:11:28.059 Malloc1 00:11:28.059 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:28.317 Malloc2 00:11:28.317 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:28.575 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:28.833 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.091 [2024-11-19 11:13:24.556195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.091 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:29.091 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a72a9dd5-645b-431d-8cef-b7653ff22fb8 -a 10.0.0.2 -s 4420 -i 4 00:11:29.349 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:29.349 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:29.349 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.349 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:29.349 11:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:31.875 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:31.875 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:31.875 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:31.875 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:31.875 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:31.875 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:31.875 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:31.876 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:31.876 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:31.876 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:31.876 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:31.876 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:31.876 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:31.876 [ 0]:0x1 00:11:31.876 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:31.876 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:31.876 
11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8accf0eb7c14c3e9455e908f59d46e4 00:11:31.876 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8accf0eb7c14c3e9455e908f59d46e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:31.876 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:31.876 [ 0]:0x1 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8accf0eb7c14c3e9455e908f59d46e4 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8accf0eb7c14c3e9455e908f59d46e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:31.876 [ 1]:0x2 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=241b7adfc213405fbea0339bc816081f 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 241b7adfc213405fbea0339bc816081f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.876 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.133 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:32.699 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:32.699 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a72a9dd5-645b-431d-8cef-b7653ff22fb8 -a 10.0.0.2 -s 4420 -i 4 00:11:32.699 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:32.699 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:32.699 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.699 11:13:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:11:32.699 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:11:32.699 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:35.229 [ 0]:0x2 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:35.229 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=241b7adfc213405fbea0339bc816081f 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 241b7adfc213405fbea0339bc816081f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:35.230 [ 0]:0x1 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8accf0eb7c14c3e9455e908f59d46e4 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8accf0eb7c14c3e9455e908f59d46e4 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:35.230 [ 1]:0x2 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:35.230 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:35.487 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=241b7adfc213405fbea0339bc816081f 00:11:35.487 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 241b7adfc213405fbea0339bc816081f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.487 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:35.746 [ 0]:0x2 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=241b7adfc213405fbea0339bc816081f 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 241b7adfc213405fbea0339bc816081f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.746 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:36.004 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:36.004 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a72a9dd5-645b-431d-8cef-b7653ff22fb8 -a 10.0.0.2 -s 4420 -i 4 00:11:36.262 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:36.262 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:36.262 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.262 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:36.262 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:36.262 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:38.160 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:38.160 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:38.160 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.417 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:11:38.417 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.417 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:38.418 [ 0]:0x1 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:38.418 11:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8accf0eb7c14c3e9455e908f59d46e4 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8accf0eb7c14c3e9455e908f59d46e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:38.418 [ 1]:0x2 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=241b7adfc213405fbea0339bc816081f 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 241b7adfc213405fbea0339bc816081f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.418 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:38.675 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:38.675 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:38.675 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:38.675 
11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:38.675 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.675 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:38.675 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:38.676 [ 0]:0x2 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=241b7adfc213405fbea0339bc816081f 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 241b7adfc213405fbea0339bc816081f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.676 11:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:38.676 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:39.242 [2024-11-19 11:13:34.449462] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:39.242 request: 00:11:39.242 { 00:11:39.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.242 "nsid": 2, 00:11:39.242 "host": "nqn.2016-06.io.spdk:host1", 00:11:39.242 "method": "nvmf_ns_remove_host", 00:11:39.242 "req_id": 1 00:11:39.242 } 00:11:39.242 Got JSON-RPC error response 00:11:39.242 response: 00:11:39.242 { 00:11:39.242 "code": -32602, 00:11:39.242 "message": "Invalid parameters" 00:11:39.242 } 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:39.242 11:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:39.242 [ 0]:0x2 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=241b7adfc213405fbea0339bc816081f 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 241b7adfc213405fbea0339bc816081f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2573254 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2573254 /var/tmp/host.sock 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2573254 ']' 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:39.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.242 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:39.242 [2024-11-19 11:13:34.656318] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:11:39.242 [2024-11-19 11:13:34.656422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573254 ] 00:11:39.242 [2024-11-19 11:13:34.732153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.500 [2024-11-19 11:13:34.792708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.757 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.757 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:39.757 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.014 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:40.272 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 87edc8ce-bde1-45e3-a16c-85a418d6b8ff 00:11:40.272 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:40.272 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 87EDC8CEBDE145E3A16C85A418D6B8FF -i 00:11:40.530 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid e05c455b-70b3-4b34-8828-6a4a07da6a0e 00:11:40.530 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:40.530 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g E05C455B70B34B3488286A4A07DA6A0E -i 00:11:41.096 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:41.096 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:41.660 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:41.660 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:41.918 nvme0n1 00:11:41.918 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:41.918 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:42.175 nvme1n2 00:11:42.175 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:42.175 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:42.175 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:42.175 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:42.176 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:42.433 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:42.433 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:42.433 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:42.433 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:42.691 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 87edc8ce-bde1-45e3-a16c-85a418d6b8ff == \8\7\e\d\c\8\c\e\-\b\d\e\1\-\4\5\e\3\-\a\1\6\c\-\8\5\a\4\1\8\d\6\b\8\f\f ]] 00:11:42.691 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:42.691 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:42.691 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:42.948 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ e05c455b-70b3-4b34-8828-6a4a07da6a0e == \e\0\5\c\4\5\5\b\-\7\0\b\3\-\4\b\3\4\-\8\8\2\8\-\6\a\4\a\0\7\d\a\6\a\0\e ]] 00:11:42.948 11:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.206 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 87edc8ce-bde1-45e3-a16c-85a418d6b8ff 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 87EDC8CEBDE145E3A16C85A418D6B8FF 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 87EDC8CEBDE145E3A16C85A418D6B8FF 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.770 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:43.771 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 87EDC8CEBDE145E3A16C85A418D6B8FF 00:11:43.771 [2024-11-19 11:13:39.243301] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:11:43.771 [2024-11-19 11:13:39.243355] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:11:43.771 [2024-11-19 11:13:39.243382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.771 request: 00:11:43.771 { 00:11:43.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:43.771 "namespace": { 00:11:43.771 "bdev_name": "invalid", 00:11:43.771 "nsid": 1, 00:11:43.771 "nguid": "87EDC8CEBDE145E3A16C85A418D6B8FF", 00:11:43.771 "no_auto_visible": false 00:11:43.771 }, 00:11:43.771 "method": "nvmf_subsystem_add_ns", 00:11:43.771 "req_id": 1 00:11:43.771 } 00:11:43.771 Got JSON-RPC error response 00:11:43.771 response: 00:11:43.771 { 00:11:43.771 "code": -32602, 00:11:43.771 "message": "Invalid parameters" 00:11:43.771 } 00:11:43.771 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:43.771 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:43.771 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:43.771 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:43.771 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 87edc8ce-bde1-45e3-a16c-85a418d6b8ff 00:11:43.771 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:43.771 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 87EDC8CEBDE145E3A16C85A418D6B8FF -i 00:11:44.335 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:11:46.233 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:11:46.233 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:11:46.233 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:46.491 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:11:46.491 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2573254 00:11:46.491 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2573254 ']' 00:11:46.491 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2573254 00:11:46.491 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:46.491 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.491 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2573254 00:11:46.491 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:46.491 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:46.491 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2573254' 00:11:46.491 killing process with pid 2573254 00:11:46.491 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2573254 00:11:46.491 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2573254 00:11:46.748 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.312 rmmod nvme_tcp 00:11:47.312 rmmod 
nvme_fabrics 00:11:47.312 rmmod nvme_keyring 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2571634 ']' 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2571634 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2571634 ']' 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2571634 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2571634 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2571634' 00:11:47.312 killing process with pid 2571634 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2571634 00:11:47.312 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2571634 00:11:47.570 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.570 
11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.570 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.570 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:11:47.570 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:11:47.570 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.570 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.570 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.570 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.570 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.570 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.570 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.477 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:49.477 00:11:49.477 real 0m25.544s 00:11:49.477 user 0m36.559s 00:11:49.477 sys 0m5.239s 00:11:49.477 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.769 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:49.769 ************************************ 00:11:49.769 END TEST nvmf_ns_masking 00:11:49.769 ************************************ 00:11:49.769 11:13:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:11:49.769 11:13:44 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:49.769 11:13:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:49.769 11:13:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.769 11:13:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.769 ************************************ 00:11:49.769 START TEST nvmf_nvme_cli 00:11:49.769 ************************************ 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:49.769 * Looking for test storage... 00:11:49.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.769 11:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:49.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.769 --rc genhtml_branch_coverage=1 00:11:49.769 --rc genhtml_function_coverage=1 00:11:49.769 --rc genhtml_legend=1 00:11:49.769 --rc geninfo_all_blocks=1 00:11:49.769 --rc geninfo_unexecuted_blocks=1 00:11:49.769 
00:11:49.769 ' 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:49.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.769 --rc genhtml_branch_coverage=1 00:11:49.769 --rc genhtml_function_coverage=1 00:11:49.769 --rc genhtml_legend=1 00:11:49.769 --rc geninfo_all_blocks=1 00:11:49.769 --rc geninfo_unexecuted_blocks=1 00:11:49.769 00:11:49.769 ' 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:49.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.769 --rc genhtml_branch_coverage=1 00:11:49.769 --rc genhtml_function_coverage=1 00:11:49.769 --rc genhtml_legend=1 00:11:49.769 --rc geninfo_all_blocks=1 00:11:49.769 --rc geninfo_unexecuted_blocks=1 00:11:49.769 00:11:49.769 ' 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:49.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.769 --rc genhtml_branch_coverage=1 00:11:49.769 --rc genhtml_function_coverage=1 00:11:49.769 --rc genhtml_legend=1 00:11:49.769 --rc geninfo_all_blocks=1 00:11:49.769 --rc geninfo_unexecuted_blocks=1 00:11:49.769 00:11:49.769 ' 00:11:49.769 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.770 11:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:49.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:11:49.770 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.327 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.327 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.327 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.327 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:11:52.587 11:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:52.587 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:52.587 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.587 11:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:11:52.587 Found net devices under 0000:82:00.0: cvl_0_0 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:52.587 Found net devices under 0000:82:00.1: cvl_0_1 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.587 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.588 11:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:11:52.588 00:11:52.588 --- 10.0.0.2 ping statistics --- 00:11:52.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.588 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:11:52.588 00:11:52.588 --- 10.0.0.1 ping statistics --- 00:11:52.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.588 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.588 11:13:47 
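The trace above is `nvmf_tcp_init`: it moves one port of the NIC into a private network namespace so initiator and target traffic actually cross the wire, opens TCP port 4420, and pings in both directions. Collected into one place, the same sequence looks roughly like this (interface names and addresses are copied from the log; `RUN=echo` keeps it a dry run, since executing for real needs root):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence shown in the trace above.
# Default RUN=echo prints the commands; set RUN= to execute (root required).
RUN=${RUN:-echo}

TARGET_IF=cvl_0_0       # moved into the namespace, receives the target IP
INITIATOR_IF=cvl_0_1    # stays in the root namespace
NS=cvl_0_0_ns_spdk

$RUN ip -4 addr flush "$TARGET_IF"
$RUN ip -4 addr flush "$INITIATOR_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TARGET_IF" netns "$NS"
$RUN ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
$RUN ip link set "$INITIATOR_IF" up
$RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
$RUN ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port, then verify reachability in both directions
$RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1
```

The `ipts` helper seen in the trace additionally tags the rule with an `SPDK_NVMF` comment so teardown can strip it with `iptables-save | grep -v SPDK_NVMF | iptables-restore`.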
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2576580 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2576580 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2576580 ']' 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.588 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.588 [2024-11-19 11:13:48.049146] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:11:52.588 [2024-11-19 11:13:48.049232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.847 [2024-11-19 11:13:48.130275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.847 [2024-11-19 11:13:48.186577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.847 [2024-11-19 11:13:48.186636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.847 [2024-11-19 11:13:48.186665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.847 [2024-11-19 11:13:48.186676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.847 [2024-11-19 11:13:48.186686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
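The `nvmfappstart -m 0xF` step above launches `nvmf_tgt` inside the target namespace and then `waitforlisten` blocks until the RPC socket is up. A minimal sketch of that launch (binary path and flags are copied from the log; the polling loop here is a simplified stand-in for the autotest `waitforlisten` helper, not its actual code):

```shell
# Start nvmf_tgt in the target netns and wait for its UNIX RPC socket.
# Simplified sketch of nvmfappstart/waitforlisten from the trace above.
start_nvmf_tgt() {
    local ns=cvl_0_0_ns_spdk
    local tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    local rpc_sock=/var/tmp/spdk.sock
    local i

    ip netns exec "$ns" "$tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!    # kept for killprocess at teardown time

    # Poll until the target creates its RPC socket
    for (( i = 0; i < 100; i++ )); do
        [ -S "$rpc_sock" ] && return 0
        sleep 0.1
    done
    echo "nvmf_tgt did not create $rpc_sock" >&2
    return 1
}
```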
00:11:52.847 [2024-11-19 11:13:48.188326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.847 [2024-11-19 11:13:48.188468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.847 [2024-11-19 11:13:48.188496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.847 [2024-11-19 11:13:48.188499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.847 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.847 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:11:52.848 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.848 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.848 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.848 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.848 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.848 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.848 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.848 [2024-11-19 11:13:48.334825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.848 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.848 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:52.848 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:52.848 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.106 Malloc0 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.106 Malloc1 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.106 [2024-11-19 11:13:48.436370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.106 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 4420 00:11:53.364 00:11:53.364 Discovery Log Number of Records 2, Generation counter 2 00:11:53.364 =====Discovery Log Entry 0====== 00:11:53.364 trtype: tcp 00:11:53.364 adrfam: ipv4 00:11:53.364 subtype: current discovery subsystem 00:11:53.364 treq: not required 00:11:53.364 portid: 0 00:11:53.364 trsvcid: 4420 
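The `rpc_cmd` calls above provision the target: one TCP transport, two 64 MiB malloc bdevs, a subsystem with both namespaces, and data plus discovery listeners. In autotest, `rpc_cmd` wraps `scripts/rpc.py` against `/var/tmp/spdk.sock`; the sketch below replays the traced arguments directly (the `rpc.py` path is inferred from the workspace layout in the log):

```shell
# Replay of the rpc_cmd provisioning sequence from the trace above.
provision_target() {
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" bdev_malloc_create 64 512 -b Malloc1
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
}
```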
00:11:53.364 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:53.364 traddr: 10.0.0.2 00:11:53.364 eflags: explicit discovery connections, duplicate discovery information 00:11:53.364 sectype: none 00:11:53.364 =====Discovery Log Entry 1====== 00:11:53.364 trtype: tcp 00:11:53.364 adrfam: ipv4 00:11:53.364 subtype: nvme subsystem 00:11:53.364 treq: not required 00:11:53.364 portid: 0 00:11:53.364 trsvcid: 4420 00:11:53.364 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:53.364 traddr: 10.0.0.2 00:11:53.364 eflags: none 00:11:53.364 sectype: none 00:11:53.364 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:53.364 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:53.364 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:11:53.364 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.364 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:11:53.364 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:11:53.364 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.364 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:11:53.364 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.364 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:53.364 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:53.930 11:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:53.930 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:11:53.930 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.930 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:53.930 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:53.930 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:11:55.827 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:55.827 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:55.827 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.827 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:11:55.827 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.827 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:11:55.827 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:55.827 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:11:55.827 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:55.827 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:11:56.085 
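The host-side steps above are `nvme discover`, `nvme connect`, and then `waitforserial`, which retries `lsblk` until both namespaces appear with the subsystem's serial. A condensed sketch (hostnqn/hostid and addresses are the ones printed in the log; the retry loop mirrors the traced `(( i++ <= 15 ))` / `sleep 2` logic):

```shell
# Discover, connect, and wait for both namespaces to surface in lsblk,
# as in the waitforserial trace above.
connect_and_wait() {
    local hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
    local hostid=8b464f06-2980-e311-ba20-001e67a94acd
    local serial=SPDKISFASTANDAWESOME expected=2 i

    nvme discover --hostnqn="$hostnqn" --hostid="$hostid" \
        -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    # Retry until lsblk shows the expected namespace count (here 2)
    for (( i = 0; i <= 15; i++ )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) \
            && return 0
    done
    return 1
}
```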
11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:11:56.085 /dev/nvme0n2 ]] 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:56.085 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.344 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.344 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.344 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.344 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.602 rmmod nvme_tcp 00:11:56.602 rmmod nvme_fabrics 00:11:56.602 rmmod nvme_keyring 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2576580 ']' 
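The disconnect and cleanup above reverse the setup: drop the fabric connection, wait for the serial to vanish from `lsblk` (`waitforserial_disconnect`), delete the subsystem over RPC, and unload the kernel modules (the `rmmod nvme_tcp` / `nvme_fabrics` / `nvme_keyring` lines are `modprobe -r` output). A sketch of the same order (the `rpc.py` path is inferred, as before):

```shell
# Teardown mirroring the trace above: disconnect, wait, delete, unload.
teardown() {
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    local serial=SPDKISFASTANDAWESOME

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    # waitforserial_disconnect: block until no block device carries the serial
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        sleep 1
    done

    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
}
```

After this, the trace kills the `nvmf_tgt` process, strips the `SPDK_NVMF`-tagged iptables rule, and removes the namespace.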
00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2576580 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2576580 ']' 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2576580 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2576580 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.602 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.603 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2576580' 00:11:56.603 killing process with pid 2576580 00:11:56.603 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2576580 00:11:56.603 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2576580 00:11:56.863 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.863 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:56.863 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.863 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:11:56.863 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:11:56.863 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:11:56.863 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:11:56.863 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.863 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.863 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.863 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.863 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:59.404 00:11:59.404 real 0m9.266s 00:11:59.404 user 0m16.850s 00:11:59.404 sys 0m2.743s 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:59.404 ************************************ 00:11:59.404 END TEST nvmf_nvme_cli 00:11:59.404 ************************************ 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.404 ************************************ 00:11:59.404 
START TEST nvmf_vfio_user 00:11:59.404 ************************************ 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:59.404 * Looking for test storage... 00:11:59.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.404 11:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:11:59.404 11:13:54 
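The `lt 1.15 2` trace above is `scripts/common.sh` deciding whether the installed `lcov` predates version 2, comparing dotted versions field by field (splitting on `.`/`-`, padding missing fields with zero). A compact, simplified sketch of that comparison (not the exact `cmp_versions` code, which also handles `>`, `==`, and `.x` suffixes):

```shell
# Field-by-field dotted-version compare, simplified from the cmp_versions
# trace above. Returns 0 (true) when $1 is strictly less than $2.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local len=${#ver1[@]} v
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal versions are not "less than"
}
```

Note why plain string comparison would be wrong here: lexically `"1.15" > "2"` is false but `"1.15" > "1.2"` would be true, whereas numerically 15 > 2 per field.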
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:59.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.404 --rc genhtml_branch_coverage=1 00:11:59.404 --rc genhtml_function_coverage=1 00:11:59.404 --rc genhtml_legend=1 00:11:59.404 --rc geninfo_all_blocks=1 00:11:59.404 --rc geninfo_unexecuted_blocks=1 00:11:59.404 00:11:59.404 ' 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:59.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.404 --rc genhtml_branch_coverage=1 00:11:59.404 --rc genhtml_function_coverage=1 00:11:59.404 --rc genhtml_legend=1 00:11:59.404 --rc geninfo_all_blocks=1 00:11:59.404 --rc geninfo_unexecuted_blocks=1 00:11:59.404 00:11:59.404 ' 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:59.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.404 --rc genhtml_branch_coverage=1 00:11:59.404 --rc genhtml_function_coverage=1 00:11:59.404 --rc genhtml_legend=1 00:11:59.404 --rc geninfo_all_blocks=1 00:11:59.404 --rc geninfo_unexecuted_blocks=1 00:11:59.404 00:11:59.404 ' 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:59.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.404 --rc genhtml_branch_coverage=1 00:11:59.404 --rc genhtml_function_coverage=1 00:11:59.404 --rc genhtml_legend=1 00:11:59.404 --rc geninfo_all_blocks=1 00:11:59.404 --rc geninfo_unexecuted_blocks=1 00:11:59.404 00:11:59.404 ' 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:59.404 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.405 
11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:59.405 11:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2577514 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2577514' 00:11:59.405 Process pid: 2577514 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2577514 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2577514 ']' 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:59.405 [2024-11-19 11:13:54.565634] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:11:59.405 [2024-11-19 11:13:54.565728] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.405 [2024-11-19 11:13:54.642663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.405 [2024-11-19 11:13:54.700012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.405 [2024-11-19 11:13:54.700064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.405 [2024-11-19 11:13:54.700092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.405 [2024-11-19 11:13:54.700103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.405 [2024-11-19 11:13:54.700112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:59.405 [2024-11-19 11:13:54.701582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.405 [2024-11-19 11:13:54.701641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.405 [2024-11-19 11:13:54.701707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.405 [2024-11-19 11:13:54.701710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:11:59.405 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:00.338 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:00.903 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:00.903 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:00.903 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:00.903 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:00.903 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:01.161 Malloc1 00:12:01.161 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:01.418 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:01.676 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:01.933 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:01.933 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:01.933 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:02.190 Malloc2 00:12:02.190 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:02.447 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:02.705 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:02.963 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:02.963 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:02.963 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:12:02.963 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:02.963 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:02.963 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:02.963 [2024-11-19 11:13:58.365780] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:12:02.963 [2024-11-19 11:13:58.365824] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2577937 ] 00:12:02.963 [2024-11-19 11:13:58.416463] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:02.963 [2024-11-19 11:13:58.421830] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:02.963 [2024-11-19 11:13:58.421863] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f98e0e6d000 00:12:02.963 [2024-11-19 11:13:58.422824] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.963 [2024-11-19 11:13:58.423818] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.963 [2024-11-19 11:13:58.424825] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.963 [2024-11-19 11:13:58.425833] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:02.963 [2024-11-19 11:13:58.426840] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:02.964 [2024-11-19 11:13:58.427847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.964 [2024-11-19 11:13:58.428857] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:02.964 [2024-11-19 11:13:58.429861] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.964 [2024-11-19 11:13:58.430869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:02.964 [2024-11-19 11:13:58.430889] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f98e0e62000 00:12:02.964 [2024-11-19 11:13:58.432009] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:02.964 [2024-11-19 11:13:58.447006] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:02.964 [2024-11-19 11:13:58.447051] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:02.964 [2024-11-19 11:13:58.451985] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:12:02.964 [2024-11-19 11:13:58.452097] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:02.964 [2024-11-19 11:13:58.452192] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:02.964 [2024-11-19 11:13:58.452225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:02.964 [2024-11-19 11:13:58.452237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:02.964 [2024-11-19 11:13:58.452984] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:02.964 [2024-11-19 11:13:58.453005] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:02.964 [2024-11-19 11:13:58.453018] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:02.964 [2024-11-19 11:13:58.453989] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:02.964 [2024-11-19 11:13:58.454008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:02.964 [2024-11-19 11:13:58.454022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:02.964 [2024-11-19 11:13:58.454995] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:02.964 [2024-11-19 11:13:58.455014] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:02.964 [2024-11-19 11:13:58.457374] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:02.964 [2024-11-19 11:13:58.457395] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:02.964 [2024-11-19 11:13:58.457409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:02.964 [2024-11-19 11:13:58.457422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:02.964 [2024-11-19 11:13:58.457533] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:02.964 [2024-11-19 11:13:58.457541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:02.964 [2024-11-19 11:13:58.457550] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:02.964 [2024-11-19 11:13:58.458018] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:02.964 [2024-11-19 11:13:58.459018] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:02.964 [2024-11-19 11:13:58.460026] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:12:03.223 [2024-11-19 11:13:58.461022] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:03.223 [2024-11-19 11:13:58.461125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:03.223 [2024-11-19 11:13:58.462035] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:03.223 [2024-11-19 11:13:58.462053] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:03.223 [2024-11-19 11:13:58.462062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:03.223 [2024-11-19 11:13:58.462085] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:03.223 [2024-11-19 11:13:58.462105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:03.223 [2024-11-19 11:13:58.462137] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:03.223 [2024-11-19 11:13:58.462148] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:03.223 [2024-11-19 11:13:58.462155] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:03.223 [2024-11-19 11:13:58.462177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:03.223 [2024-11-19 11:13:58.462237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:03.223 [2024-11-19 11:13:58.462256] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:03.223 [2024-11-19 11:13:58.462265] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:03.223 [2024-11-19 11:13:58.462271] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:03.223 [2024-11-19 11:13:58.462280] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:03.223 [2024-11-19 11:13:58.462291] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:03.223 [2024-11-19 11:13:58.462304] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:03.223 [2024-11-19 11:13:58.462312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:03.223 [2024-11-19 11:13:58.462328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:03.223 [2024-11-19 11:13:58.462344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:03.223 [2024-11-19 11:13:58.462359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:03.223 [2024-11-19 11:13:58.462401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.223 [2024-11-19 
11:13:58.462414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.223 [2024-11-19 11:13:58.462427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.223 [2024-11-19 11:13:58.462439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.223 [2024-11-19 11:13:58.462447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:03.223 [2024-11-19 11:13:58.462459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:03.223 [2024-11-19 11:13:58.462472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:03.223 [2024-11-19 11:13:58.462484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:03.223 [2024-11-19 11:13:58.462500] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:03.223 [2024-11-19 11:13:58.462509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:03.223 [2024-11-19 11:13:58.462521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:03.223 [2024-11-19 11:13:58.462531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:12:03.223 [2024-11-19 11:13:58.462543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:03.223 [2024-11-19 11:13:58.462555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:03.223 [2024-11-19 11:13:58.462623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:03.223 [2024-11-19 11:13:58.462656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:03.224 [2024-11-19 11:13:58.462671] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:03.224 [2024-11-19 11:13:58.462680] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:03.224 [2024-11-19 11:13:58.462686] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:03.224 [2024-11-19 11:13:58.462696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:03.224 [2024-11-19 11:13:58.462715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:03.224 [2024-11-19 11:13:58.462737] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:03.224 [2024-11-19 11:13:58.462754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:03.224 [2024-11-19 11:13:58.462770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:03.224 [2024-11-19 11:13:58.462783] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:03.224 [2024-11-19 11:13:58.462791] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:03.224 [2024-11-19 11:13:58.462797] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:03.224 [2024-11-19 11:13:58.462807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:03.224 [2024-11-19 11:13:58.462833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:03.224 [2024-11-19 11:13:58.462859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:03.224 [2024-11-19 11:13:58.462875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:03.224 [2024-11-19 11:13:58.462888] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:03.224 [2024-11-19 11:13:58.462896] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:03.224 [2024-11-19 11:13:58.462902] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:03.224 [2024-11-19 11:13:58.462912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:03.224 [2024-11-19 11:13:58.462929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:03.224 [2024-11-19 11:13:58.462944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:03.224 [2024-11-19 11:13:58.462956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:03.224 [2024-11-19 11:13:58.462970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:03.224 [2024-11-19 11:13:58.462982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:03.224 [2024-11-19 11:13:58.462991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:03.224 [2024-11-19 11:13:58.462999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:03.224 [2024-11-19 11:13:58.463009] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:03.224 [2024-11-19 11:13:58.463017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:03.224 [2024-11-19 11:13:58.463026] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:03.224 [2024-11-19 11:13:58.463059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:03.224 [2024-11-19 11:13:58.463079] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:03.224 [2024-11-19 11:13:58.463099] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:03.224 [2024-11-19 11:13:58.463127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:03.224 [2024-11-19 11:13:58.463144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:03.224 [2024-11-19 11:13:58.463156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:03.224 [2024-11-19 11:13:58.463172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:03.224 [2024-11-19 11:13:58.463199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:03.224 [2024-11-19 11:13:58.463222] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:03.224 [2024-11-19 11:13:58.463232] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:03.224 [2024-11-19 11:13:58.463238] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:03.224 [2024-11-19 11:13:58.463244] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:03.224 [2024-11-19 11:13:58.463249] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:03.224 [2024-11-19 11:13:58.463258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:12:03.224 [2024-11-19 11:13:58.463270] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:03.224 [2024-11-19 11:13:58.463278] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:03.224 [2024-11-19 11:13:58.463283] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:03.224 [2024-11-19 11:13:58.463292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:03.224 [2024-11-19 11:13:58.463302] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:03.224 [2024-11-19 11:13:58.463310] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:03.224 [2024-11-19 11:13:58.463316] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:03.224 [2024-11-19 11:13:58.463324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:03.224 [2024-11-19 11:13:58.463336] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:03.224 [2024-11-19 11:13:58.463359] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:03.224 [2024-11-19 11:13:58.463373] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:03.224 [2024-11-19 11:13:58.463383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:03.224 [2024-11-19 11:13:58.463395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:12:03.224 [2024-11-19 11:13:58.463434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:03.224 [2024-11-19 11:13:58.463457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:03.224 [2024-11-19 11:13:58.463470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:03.224 ===================================================== 00:12:03.224 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:03.224 ===================================================== 00:12:03.224 Controller Capabilities/Features 00:12:03.224 ================================ 00:12:03.224 Vendor ID: 4e58 00:12:03.224 Subsystem Vendor ID: 4e58 00:12:03.224 Serial Number: SPDK1 00:12:03.224 Model Number: SPDK bdev Controller 00:12:03.224 Firmware Version: 25.01 00:12:03.224 Recommended Arb Burst: 6 00:12:03.224 IEEE OUI Identifier: 8d 6b 50 00:12:03.224 Multi-path I/O 00:12:03.224 May have multiple subsystem ports: Yes 00:12:03.224 May have multiple controllers: Yes 00:12:03.224 Associated with SR-IOV VF: No 00:12:03.224 Max Data Transfer Size: 131072 00:12:03.224 Max Number of Namespaces: 32 00:12:03.224 Max Number of I/O Queues: 127 00:12:03.224 NVMe Specification Version (VS): 1.3 00:12:03.224 NVMe Specification Version (Identify): 1.3 00:12:03.224 Maximum Queue Entries: 256 00:12:03.224 Contiguous Queues Required: Yes 00:12:03.224 Arbitration Mechanisms Supported 00:12:03.224 Weighted Round Robin: Not Supported 00:12:03.224 Vendor Specific: Not Supported 00:12:03.224 Reset Timeout: 15000 ms 00:12:03.224 Doorbell Stride: 4 bytes 00:12:03.224 NVM Subsystem Reset: Not Supported 00:12:03.224 Command Sets Supported 00:12:03.224 NVM Command Set: Supported 00:12:03.224 Boot Partition: Not Supported 00:12:03.224 Memory 
Page Size Minimum: 4096 bytes 00:12:03.224 Memory Page Size Maximum: 4096 bytes 00:12:03.224 Persistent Memory Region: Not Supported 00:12:03.224 Optional Asynchronous Events Supported 00:12:03.224 Namespace Attribute Notices: Supported 00:12:03.224 Firmware Activation Notices: Not Supported 00:12:03.224 ANA Change Notices: Not Supported 00:12:03.224 PLE Aggregate Log Change Notices: Not Supported 00:12:03.224 LBA Status Info Alert Notices: Not Supported 00:12:03.225 EGE Aggregate Log Change Notices: Not Supported 00:12:03.225 Normal NVM Subsystem Shutdown event: Not Supported 00:12:03.225 Zone Descriptor Change Notices: Not Supported 00:12:03.225 Discovery Log Change Notices: Not Supported 00:12:03.225 Controller Attributes 00:12:03.225 128-bit Host Identifier: Supported 00:12:03.225 Non-Operational Permissive Mode: Not Supported 00:12:03.225 NVM Sets: Not Supported 00:12:03.225 Read Recovery Levels: Not Supported 00:12:03.225 Endurance Groups: Not Supported 00:12:03.225 Predictable Latency Mode: Not Supported 00:12:03.225 Traffic Based Keep ALive: Not Supported 00:12:03.225 Namespace Granularity: Not Supported 00:12:03.225 SQ Associations: Not Supported 00:12:03.225 UUID List: Not Supported 00:12:03.225 Multi-Domain Subsystem: Not Supported 00:12:03.225 Fixed Capacity Management: Not Supported 00:12:03.225 Variable Capacity Management: Not Supported 00:12:03.225 Delete Endurance Group: Not Supported 00:12:03.225 Delete NVM Set: Not Supported 00:12:03.225 Extended LBA Formats Supported: Not Supported 00:12:03.225 Flexible Data Placement Supported: Not Supported 00:12:03.225 00:12:03.225 Controller Memory Buffer Support 00:12:03.225 ================================ 00:12:03.225 Supported: No 00:12:03.225 00:12:03.225 Persistent Memory Region Support 00:12:03.225 ================================ 00:12:03.225 Supported: No 00:12:03.225 00:12:03.225 Admin Command Set Attributes 00:12:03.225 ============================ 00:12:03.225 Security Send/Receive: Not Supported 
00:12:03.225 Format NVM: Not Supported 00:12:03.225 Firmware Activate/Download: Not Supported 00:12:03.225 Namespace Management: Not Supported 00:12:03.225 Device Self-Test: Not Supported 00:12:03.225 Directives: Not Supported 00:12:03.225 NVMe-MI: Not Supported 00:12:03.225 Virtualization Management: Not Supported 00:12:03.225 Doorbell Buffer Config: Not Supported 00:12:03.225 Get LBA Status Capability: Not Supported 00:12:03.225 Command & Feature Lockdown Capability: Not Supported 00:12:03.225 Abort Command Limit: 4 00:12:03.225 Async Event Request Limit: 4 00:12:03.225 Number of Firmware Slots: N/A 00:12:03.225 Firmware Slot 1 Read-Only: N/A 00:12:03.225 Firmware Activation Without Reset: N/A 00:12:03.225 Multiple Update Detection Support: N/A 00:12:03.225 Firmware Update Granularity: No Information Provided 00:12:03.225 Per-Namespace SMART Log: No 00:12:03.225 Asymmetric Namespace Access Log Page: Not Supported 00:12:03.225 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:03.225 Command Effects Log Page: Supported 00:12:03.225 Get Log Page Extended Data: Supported 00:12:03.225 Telemetry Log Pages: Not Supported 00:12:03.225 Persistent Event Log Pages: Not Supported 00:12:03.225 Supported Log Pages Log Page: May Support 00:12:03.225 Commands Supported & Effects Log Page: Not Supported 00:12:03.225 Feature Identifiers & Effects Log Page:May Support 00:12:03.225 NVMe-MI Commands & Effects Log Page: May Support 00:12:03.225 Data Area 4 for Telemetry Log: Not Supported 00:12:03.225 Error Log Page Entries Supported: 128 00:12:03.225 Keep Alive: Supported 00:12:03.225 Keep Alive Granularity: 10000 ms 00:12:03.225 00:12:03.225 NVM Command Set Attributes 00:12:03.225 ========================== 00:12:03.225 Submission Queue Entry Size 00:12:03.225 Max: 64 00:12:03.225 Min: 64 00:12:03.225 Completion Queue Entry Size 00:12:03.225 Max: 16 00:12:03.225 Min: 16 00:12:03.225 Number of Namespaces: 32 00:12:03.225 Compare Command: Supported 00:12:03.225 Write Uncorrectable 
Command: Not Supported 00:12:03.225 Dataset Management Command: Supported 00:12:03.225 Write Zeroes Command: Supported 00:12:03.225 Set Features Save Field: Not Supported 00:12:03.225 Reservations: Not Supported 00:12:03.225 Timestamp: Not Supported 00:12:03.225 Copy: Supported 00:12:03.225 Volatile Write Cache: Present 00:12:03.225 Atomic Write Unit (Normal): 1 00:12:03.225 Atomic Write Unit (PFail): 1 00:12:03.225 Atomic Compare & Write Unit: 1 00:12:03.225 Fused Compare & Write: Supported 00:12:03.225 Scatter-Gather List 00:12:03.225 SGL Command Set: Supported (Dword aligned) 00:12:03.225 SGL Keyed: Not Supported 00:12:03.225 SGL Bit Bucket Descriptor: Not Supported 00:12:03.225 SGL Metadata Pointer: Not Supported 00:12:03.225 Oversized SGL: Not Supported 00:12:03.225 SGL Metadata Address: Not Supported 00:12:03.225 SGL Offset: Not Supported 00:12:03.225 Transport SGL Data Block: Not Supported 00:12:03.225 Replay Protected Memory Block: Not Supported 00:12:03.225 00:12:03.225 Firmware Slot Information 00:12:03.225 ========================= 00:12:03.225 Active slot: 1 00:12:03.225 Slot 1 Firmware Revision: 25.01 00:12:03.225 00:12:03.225 00:12:03.225 Commands Supported and Effects 00:12:03.225 ============================== 00:12:03.225 Admin Commands 00:12:03.225 -------------- 00:12:03.225 Get Log Page (02h): Supported 00:12:03.225 Identify (06h): Supported 00:12:03.225 Abort (08h): Supported 00:12:03.225 Set Features (09h): Supported 00:12:03.225 Get Features (0Ah): Supported 00:12:03.225 Asynchronous Event Request (0Ch): Supported 00:12:03.225 Keep Alive (18h): Supported 00:12:03.225 I/O Commands 00:12:03.225 ------------ 00:12:03.225 Flush (00h): Supported LBA-Change 00:12:03.225 Write (01h): Supported LBA-Change 00:12:03.225 Read (02h): Supported 00:12:03.225 Compare (05h): Supported 00:12:03.225 Write Zeroes (08h): Supported LBA-Change 00:12:03.225 Dataset Management (09h): Supported LBA-Change 00:12:03.225 Copy (19h): Supported LBA-Change 00:12:03.225 
00:12:03.225 Error Log 00:12:03.225 ========= 00:12:03.225 00:12:03.225 Arbitration 00:12:03.225 =========== 00:12:03.225 Arbitration Burst: 1 00:12:03.225 00:12:03.225 Power Management 00:12:03.225 ================ 00:12:03.225 Number of Power States: 1 00:12:03.225 Current Power State: Power State #0 00:12:03.225 Power State #0: 00:12:03.225 Max Power: 0.00 W 00:12:03.225 Non-Operational State: Operational 00:12:03.225 Entry Latency: Not Reported 00:12:03.225 Exit Latency: Not Reported 00:12:03.225 Relative Read Throughput: 0 00:12:03.225 Relative Read Latency: 0 00:12:03.225 Relative Write Throughput: 0 00:12:03.225 Relative Write Latency: 0 00:12:03.225 Idle Power: Not Reported 00:12:03.225 Active Power: Not Reported 00:12:03.225 Non-Operational Permissive Mode: Not Supported 00:12:03.225 00:12:03.225 Health Information 00:12:03.225 ================== 00:12:03.225 Critical Warnings: 00:12:03.225 Available Spare Space: OK 00:12:03.225 Temperature: OK 00:12:03.225 Device Reliability: OK 00:12:03.225 Read Only: No 00:12:03.225 Volatile Memory Backup: OK 00:12:03.225 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:03.225 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:03.225 Available Spare: 0% 00:12:03.226 Available Sp[2024-11-19 11:13:58.463603] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:03.226 [2024-11-19 11:13:58.463620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:03.226 [2024-11-19 11:13:58.463668] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:03.226 [2024-11-19 11:13:58.463686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.226 [2024-11-19 11:13:58.463698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.226 [2024-11-19 11:13:58.463708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.226 [2024-11-19 11:13:58.463718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.226 [2024-11-19 11:13:58.466373] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:03.226 [2024-11-19 11:13:58.466397] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:03.226 [2024-11-19 11:13:58.467058] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:03.226 [2024-11-19 11:13:58.467134] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:03.226 [2024-11-19 11:13:58.467147] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:03.226 [2024-11-19 11:13:58.468071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:03.226 [2024-11-19 11:13:58.468095] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:03.226 [2024-11-19 11:13:58.468153] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:03.226 [2024-11-19 11:13:58.470106] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:03.226 are Threshold: 0% 00:12:03.226 Life Percentage Used: 0% 
00:12:03.226 Data Units Read: 0 00:12:03.226 Data Units Written: 0 00:12:03.226 Host Read Commands: 0 00:12:03.226 Host Write Commands: 0 00:12:03.226 Controller Busy Time: 0 minutes 00:12:03.226 Power Cycles: 0 00:12:03.226 Power On Hours: 0 hours 00:12:03.226 Unsafe Shutdowns: 0 00:12:03.226 Unrecoverable Media Errors: 0 00:12:03.226 Lifetime Error Log Entries: 0 00:12:03.226 Warning Temperature Time: 0 minutes 00:12:03.226 Critical Temperature Time: 0 minutes 00:12:03.226 00:12:03.226 Number of Queues 00:12:03.226 ================ 00:12:03.226 Number of I/O Submission Queues: 127 00:12:03.226 Number of I/O Completion Queues: 127 00:12:03.226 00:12:03.226 Active Namespaces 00:12:03.226 ================= 00:12:03.226 Namespace ID:1 00:12:03.226 Error Recovery Timeout: Unlimited 00:12:03.226 Command Set Identifier: NVM (00h) 00:12:03.226 Deallocate: Supported 00:12:03.226 Deallocated/Unwritten Error: Not Supported 00:12:03.226 Deallocated Read Value: Unknown 00:12:03.226 Deallocate in Write Zeroes: Not Supported 00:12:03.226 Deallocated Guard Field: 0xFFFF 00:12:03.226 Flush: Supported 00:12:03.226 Reservation: Supported 00:12:03.226 Namespace Sharing Capabilities: Multiple Controllers 00:12:03.226 Size (in LBAs): 131072 (0GiB) 00:12:03.226 Capacity (in LBAs): 131072 (0GiB) 00:12:03.226 Utilization (in LBAs): 131072 (0GiB) 00:12:03.226 NGUID: B944F2789C924E6C9F26AE8380286FEC 00:12:03.226 UUID: b944f278-9c92-4e6c-9f26-ae8380286fec 00:12:03.226 Thin Provisioning: Not Supported 00:12:03.226 Per-NS Atomic Units: Yes 00:12:03.226 Atomic Boundary Size (Normal): 0 00:12:03.226 Atomic Boundary Size (PFail): 0 00:12:03.226 Atomic Boundary Offset: 0 00:12:03.226 Maximum Single Source Range Length: 65535 00:12:03.226 Maximum Copy Length: 65535 00:12:03.226 Maximum Source Range Count: 1 00:12:03.226 NGUID/EUI64 Never Reused: No 00:12:03.226 Namespace Write Protected: No 00:12:03.226 Number of LBA Formats: 1 00:12:03.226 Current LBA Format: LBA Format #00 00:12:03.226 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:12:03.226 00:12:03.226 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:03.484 [2024-11-19 11:13:58.724287] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:08.748 Initializing NVMe Controllers 00:12:08.748 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:08.748 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:08.748 Initialization complete. Launching workers. 00:12:08.748 ======================================================== 00:12:08.748 Latency(us) 00:12:08.748 Device Information : IOPS MiB/s Average min max 00:12:08.748 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33311.24 130.12 3841.59 1208.94 9291.47 00:12:08.748 ======================================================== 00:12:08.748 Total : 33311.24 130.12 3841.59 1208.94 9291.47 00:12:08.748 00:12:08.748 [2024-11-19 11:14:03.747031] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:08.748 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:08.748 [2024-11-19 11:14:03.999212] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:14.007 Initializing NVMe Controllers 00:12:14.007 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:14.007 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:14.007 Initialization complete. Launching workers. 00:12:14.007 ======================================================== 00:12:14.007 Latency(us) 00:12:14.007 Device Information : IOPS MiB/s Average min max 00:12:14.007 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.96 5982.68 10986.16 00:12:14.007 ======================================================== 00:12:14.008 Total : 16051.20 62.70 7980.96 5982.68 10986.16 00:12:14.008 00:12:14.008 [2024-11-19 11:14:09.033401] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:14.008 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:14.008 [2024-11-19 11:14:09.278614] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:19.268 [2024-11-19 11:14:14.347704] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:19.268 Initializing NVMe Controllers 00:12:19.268 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:19.268 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:19.268 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:19.268 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:19.268 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:19.268 Initialization complete. 
Launching workers. 00:12:19.268 Starting thread on core 2 00:12:19.268 Starting thread on core 3 00:12:19.268 Starting thread on core 1 00:12:19.268 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:19.268 [2024-11-19 11:14:14.670893] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:22.552 [2024-11-19 11:14:17.737721] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:22.552 Initializing NVMe Controllers 00:12:22.552 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:22.552 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:22.552 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:22.552 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:22.552 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:22.552 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:22.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:22.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:22.552 Initialization complete. Launching workers. 
00:12:22.552 Starting thread on core 1 with urgent priority queue 00:12:22.552 Starting thread on core 2 with urgent priority queue 00:12:22.552 Starting thread on core 3 with urgent priority queue 00:12:22.552 Starting thread on core 0 with urgent priority queue 00:12:22.552 SPDK bdev Controller (SPDK1 ) core 0: 5367.00 IO/s 18.63 secs/100000 ios 00:12:22.552 SPDK bdev Controller (SPDK1 ) core 1: 5758.33 IO/s 17.37 secs/100000 ios 00:12:22.552 SPDK bdev Controller (SPDK1 ) core 2: 5464.67 IO/s 18.30 secs/100000 ios 00:12:22.552 SPDK bdev Controller (SPDK1 ) core 3: 5892.33 IO/s 16.97 secs/100000 ios 00:12:22.552 ======================================================== 00:12:22.552 00:12:22.552 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:22.810 [2024-11-19 11:14:18.065353] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:22.810 Initializing NVMe Controllers 00:12:22.810 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:22.810 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:22.810 Namespace ID: 1 size: 0GB 00:12:22.810 Initialization complete. 00:12:22.810 INFO: using host memory buffer for IO 00:12:22.810 Hello world! 
00:12:22.810 [2024-11-19 11:14:18.100069] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:22.810 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:23.068 [2024-11-19 11:14:18.431845] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:24.001 Initializing NVMe Controllers 00:12:24.001 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:24.001 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:24.001 Initialization complete. Launching workers. 00:12:24.001 submit (in ns) avg, min, max = 6412.5, 3505.6, 4002338.9 00:12:24.001 complete (in ns) avg, min, max = 28545.2, 2064.4, 4016233.3 00:12:24.001 00:12:24.001 Submit histogram 00:12:24.001 ================ 00:12:24.001 Range in us Cumulative Count 00:12:24.001 3.484 - 3.508: 0.0077% ( 1) 00:12:24.001 3.508 - 3.532: 0.4105% ( 52) 00:12:24.001 3.532 - 3.556: 1.5722% ( 150) 00:12:24.001 3.556 - 3.579: 5.1115% ( 457) 00:12:24.001 3.579 - 3.603: 10.2928% ( 669) 00:12:24.001 3.603 - 3.627: 19.4393% ( 1181) 00:12:24.001 3.627 - 3.650: 29.6623% ( 1320) 00:12:24.001 3.650 - 3.674: 38.9250% ( 1196) 00:12:24.001 3.674 - 3.698: 45.5468% ( 855) 00:12:24.001 3.698 - 3.721: 52.0678% ( 842) 00:12:24.001 3.721 - 3.745: 57.4969% ( 701) 00:12:24.001 3.745 - 3.769: 62.4148% ( 635) 00:12:24.001 3.769 - 3.793: 66.9842% ( 590) 00:12:24.001 3.793 - 3.816: 70.1208% ( 405) 00:12:24.001 3.816 - 3.840: 73.4820% ( 434) 00:12:24.001 3.840 - 3.864: 77.1066% ( 468) 00:12:24.001 3.864 - 3.887: 80.8008% ( 477) 00:12:24.001 3.887 - 3.911: 84.1001% ( 426) 00:12:24.001 3.911 - 3.935: 86.5164% ( 312) 00:12:24.001 3.935 - 3.959: 88.1893% ( 216) 00:12:24.001 3.959 - 3.982: 89.7615% ( 203) 
00:12:24.001 3.982 - 4.006: 91.2794% ( 196) 00:12:24.001 4.006 - 4.030: 92.4024% ( 145) 00:12:24.001 4.030 - 4.053: 93.1846% ( 101) 00:12:24.001 4.053 - 4.077: 93.8739% ( 89) 00:12:24.001 4.077 - 4.101: 94.4780% ( 78) 00:12:24.001 4.101 - 4.124: 95.0589% ( 75) 00:12:24.001 4.124 - 4.148: 95.3764% ( 41) 00:12:24.001 4.148 - 4.172: 95.6939% ( 41) 00:12:24.001 4.172 - 4.196: 96.0270% ( 43) 00:12:24.001 4.196 - 4.219: 96.2903% ( 34) 00:12:24.001 4.219 - 4.243: 96.4374% ( 19) 00:12:24.001 4.243 - 4.267: 96.5226% ( 11) 00:12:24.001 4.267 - 4.290: 96.6465% ( 16) 00:12:24.001 4.290 - 4.314: 96.7550% ( 14) 00:12:24.001 4.314 - 4.338: 96.8401% ( 11) 00:12:24.001 4.338 - 4.361: 96.9718% ( 17) 00:12:24.001 4.361 - 4.385: 97.0802% ( 14) 00:12:24.001 4.385 - 4.409: 97.1190% ( 5) 00:12:24.001 4.409 - 4.433: 97.1809% ( 8) 00:12:24.001 4.433 - 4.456: 97.2506% ( 9) 00:12:24.001 4.456 - 4.480: 97.2893% ( 5) 00:12:24.001 4.480 - 4.504: 97.3281% ( 5) 00:12:24.001 4.504 - 4.527: 97.3358% ( 1) 00:12:24.001 4.527 - 4.551: 97.3436% ( 1) 00:12:24.001 4.575 - 4.599: 97.3590% ( 2) 00:12:24.001 4.599 - 4.622: 97.3745% ( 2) 00:12:24.001 4.622 - 4.646: 97.3900% ( 2) 00:12:24.001 4.646 - 4.670: 97.4133% ( 3) 00:12:24.001 4.670 - 4.693: 97.4210% ( 1) 00:12:24.001 4.693 - 4.717: 97.4675% ( 6) 00:12:24.001 4.717 - 4.741: 97.5217% ( 7) 00:12:24.001 4.741 - 4.764: 97.5604% ( 5) 00:12:24.001 4.764 - 4.788: 97.6069% ( 6) 00:12:24.001 4.788 - 4.812: 97.6688% ( 8) 00:12:24.001 4.812 - 4.836: 97.7076% ( 5) 00:12:24.001 4.836 - 4.859: 97.7463% ( 5) 00:12:24.001 4.859 - 4.883: 97.8005% ( 7) 00:12:24.001 4.883 - 4.907: 97.8702% ( 9) 00:12:24.001 4.907 - 4.930: 97.8934% ( 3) 00:12:24.001 4.930 - 4.954: 97.9322% ( 5) 00:12:24.001 4.954 - 4.978: 98.0096% ( 10) 00:12:24.001 4.978 - 5.001: 98.0406% ( 4) 00:12:24.001 5.001 - 5.025: 98.0948% ( 7) 00:12:24.001 5.025 - 5.049: 98.1335% ( 5) 00:12:24.001 5.049 - 5.073: 98.1568% ( 3) 00:12:24.001 5.073 - 5.096: 98.1645% ( 1) 00:12:24.001 5.096 - 5.120: 98.1877% ( 3) 
00:12:24.001 5.120 - 5.144: 98.2032% ( 2) 00:12:24.001 5.144 - 5.167: 98.2419% ( 5) 00:12:24.001 5.167 - 5.191: 98.2574% ( 2) 00:12:24.001 5.239 - 5.262: 98.2652% ( 1) 00:12:24.001 5.262 - 5.286: 98.2962% ( 4) 00:12:24.001 5.310 - 5.333: 98.3116% ( 2) 00:12:24.001 5.357 - 5.381: 98.3194% ( 1) 00:12:24.001 5.381 - 5.404: 98.3271% ( 1) 00:12:24.001 5.428 - 5.452: 98.3349% ( 1) 00:12:24.001 5.618 - 5.641: 98.3426% ( 1) 00:12:24.001 5.641 - 5.665: 98.3581% ( 2) 00:12:24.001 5.784 - 5.807: 98.3659% ( 1) 00:12:24.001 5.855 - 5.879: 98.3814% ( 2) 00:12:24.001 6.116 - 6.163: 98.3968% ( 2) 00:12:24.001 6.163 - 6.210: 98.4046% ( 1) 00:12:24.001 6.447 - 6.495: 98.4201% ( 2) 00:12:24.001 6.732 - 6.779: 98.4278% ( 1) 00:12:24.001 6.779 - 6.827: 98.4433% ( 2) 00:12:24.001 6.874 - 6.921: 98.4588% ( 2) 00:12:24.001 7.253 - 7.301: 98.4665% ( 1) 00:12:24.001 7.348 - 7.396: 98.4820% ( 2) 00:12:24.001 7.443 - 7.490: 98.4898% ( 1) 00:12:24.001 7.585 - 7.633: 98.4975% ( 1) 00:12:24.001 7.633 - 7.680: 98.5053% ( 1) 00:12:24.001 7.822 - 7.870: 98.5130% ( 1) 00:12:24.001 7.870 - 7.917: 98.5285% ( 2) 00:12:24.001 7.917 - 7.964: 98.5362% ( 1) 00:12:24.001 8.059 - 8.107: 98.5440% ( 1) 00:12:24.001 8.154 - 8.201: 98.5517% ( 1) 00:12:24.001 8.201 - 8.249: 98.5595% ( 1) 00:12:24.001 8.391 - 8.439: 98.5672% ( 1) 00:12:24.001 8.439 - 8.486: 98.5750% ( 1) 00:12:24.001 8.533 - 8.581: 98.5827% ( 1) 00:12:24.001 8.723 - 8.770: 98.5982% ( 2) 00:12:24.001 8.818 - 8.865: 98.6059% ( 1) 00:12:24.001 8.865 - 8.913: 98.6137% ( 1) 00:12:24.001 8.960 - 9.007: 98.6214% ( 1) 00:12:24.001 9.007 - 9.055: 98.6292% ( 1) 00:12:24.001 9.244 - 9.292: 98.6447% ( 2) 00:12:24.001 9.339 - 9.387: 98.6524% ( 1) 00:12:24.001 9.387 - 9.434: 98.6602% ( 1) 00:12:24.001 9.434 - 9.481: 98.6679% ( 1) 00:12:24.001 9.481 - 9.529: 98.6834% ( 2) 00:12:24.001 9.576 - 9.624: 98.6989% ( 2) 00:12:24.001 9.624 - 9.671: 98.7144% ( 2) 00:12:24.001 9.719 - 9.766: 98.7221% ( 1) 00:12:24.001 9.766 - 9.813: 98.7299% ( 1) 00:12:24.001 9.861 - 
9.908: 98.7454% ( 2) 00:12:24.001 9.956 - 10.003: 98.7531% ( 1) 00:12:24.001 10.003 - 10.050: 98.7686% ( 2) 00:12:24.001 10.240 - 10.287: 98.7841% ( 2) 00:12:24.001 10.287 - 10.335: 98.7918% ( 1) 00:12:24.001 10.524 - 10.572: 98.8073% ( 2) 00:12:24.001 10.714 - 10.761: 98.8228% ( 2) 00:12:24.001 10.809 - 10.856: 98.8305% ( 1) 00:12:24.001 10.951 - 10.999: 98.8383% ( 1) 00:12:24.001 10.999 - 11.046: 98.8460% ( 1) 00:12:24.001 11.046 - 11.093: 98.8538% ( 1) 00:12:24.001 11.141 - 11.188: 98.8615% ( 1) 00:12:24.001 11.425 - 11.473: 98.8693% ( 1) 00:12:24.001 11.473 - 11.520: 98.8770% ( 1) 00:12:24.001 11.567 - 11.615: 98.8925% ( 2) 00:12:24.001 11.662 - 11.710: 98.9002% ( 1) 00:12:24.001 11.757 - 11.804: 98.9157% ( 2) 00:12:24.001 11.804 - 11.852: 98.9235% ( 1) 00:12:24.001 11.899 - 11.947: 98.9312% ( 1) 00:12:24.001 11.947 - 11.994: 98.9390% ( 1) 00:12:24.001 11.994 - 12.041: 98.9467% ( 1) 00:12:24.001 12.136 - 12.231: 98.9545% ( 1) 00:12:24.001 12.231 - 12.326: 98.9622% ( 1) 00:12:24.001 12.610 - 12.705: 98.9777% ( 2) 00:12:24.001 12.990 - 13.084: 98.9854% ( 1) 00:12:24.001 13.369 - 13.464: 98.9932% ( 1) 00:12:24.001 13.653 - 13.748: 99.0009% ( 1) 00:12:24.001 13.748 - 13.843: 99.0087% ( 1) 00:12:24.001 14.033 - 14.127: 99.0164% ( 1) 00:12:24.001 15.170 - 15.265: 99.0242% ( 1) 00:12:24.001 16.972 - 17.067: 99.0319% ( 1) 00:12:24.001 17.067 - 17.161: 99.0474% ( 2) 00:12:24.001 17.351 - 17.446: 99.0861% ( 5) 00:12:24.001 17.446 - 17.541: 99.1094% ( 3) 00:12:24.001 17.541 - 17.636: 99.1248% ( 2) 00:12:24.001 17.636 - 17.730: 99.1791% ( 7) 00:12:24.001 17.730 - 17.825: 99.2333% ( 7) 00:12:24.001 17.825 - 17.920: 99.2565% ( 3) 00:12:24.001 17.920 - 18.015: 99.3185% ( 8) 00:12:24.001 18.015 - 18.110: 99.3804% ( 8) 00:12:24.002 18.110 - 18.204: 99.4346% ( 7) 00:12:24.002 18.204 - 18.299: 99.5043% ( 9) 00:12:24.002 18.299 - 18.394: 99.5818% ( 10) 00:12:24.002 18.394 - 18.489: 99.6670% ( 11) 00:12:24.002 18.489 - 18.584: 99.6825% ( 2) 00:12:24.002 18.584 - 18.679: 99.6980% ( 
2) 00:12:24.002 18.679 - 18.773: 99.7367% ( 5) 00:12:24.002 18.773 - 18.868: 99.7831% ( 6) 00:12:24.002 18.868 - 18.963: 99.8064% ( 3) 00:12:24.002 18.963 - 19.058: 99.8374% ( 4) 00:12:24.002 19.058 - 19.153: 99.8529% ( 2) 00:12:24.002 19.153 - 19.247: 99.8606% ( 1) 00:12:24.002 19.342 - 19.437: 99.8683% ( 1) 00:12:24.002 19.532 - 19.627: 99.8761% ( 1) 00:12:24.002 21.523 - 21.618: 99.8838% ( 1) 00:12:24.002 21.713 - 21.807: 99.8916% ( 1) 00:12:24.002 22.376 - 22.471: 99.8993% ( 1) 00:12:24.002 23.230 - 23.324: 99.9071% ( 1) 00:12:24.002 24.083 - 24.178: 99.9148% ( 1) 00:12:24.002 25.031 - 25.221: 99.9226% ( 1) 00:12:24.002 26.548 - 26.738: 99.9303% ( 1) 00:12:24.002 27.117 - 27.307: 99.9380% ( 1) 00:12:24.002 3980.705 - 4004.978: 100.0000% ( 8) 00:12:24.002 00:12:24.002 Complete histogram 00:12:24.002 ================== 00:12:24.002 Range in us Cumulative Count 00:12:24.002 2.062 - 2.074: 2.6952% ( 348) 00:12:24.002 2.074 - 2.086: 36.5939% ( 4377) 00:12:24.002 2.086 - 2.098: 44.5555% ( 1028) 00:12:24.002 2.098 - 2.110: 48.7376% ( 540) 00:12:24.002 2.110 - 2.121: 56.5753% ( 1012) 00:12:24.002 2.121 - 2.133: 58.3566% ( 230) 00:12:24.002 2.133 - 2.145: 64.5291% ( 797) 00:12:24.002 2.145 - 2.157: 77.7881% ( 1712) 00:12:24.002 2.157 - 2.169: 80.4988% ( 350) 00:12:24.002 2.169 - 2.181: 82.7292% ( 288) 00:12:24.002 2.181 - 2.193: 85.5716% ( 367) 00:12:24.002 2.193 - 2.204: 86.3460% ( 100) 00:12:24.002 2.204 - 2.216: 87.6317% ( 166) 00:12:24.002 2.216 - 2.228: 90.3656% ( 353) 00:12:24.002 2.228 - 2.240: 92.0616% ( 219) 00:12:24.002 2.240 - 2.252: 93.5719% ( 195) 00:12:24.002 2.252 - 2.264: 94.2069% ( 82) 00:12:24.002 2.264 - 2.276: 94.4470% ( 31) 00:12:24.002 2.276 - 2.287: 94.6174% ( 22) 00:12:24.002 2.287 - 2.299: 94.9272% ( 40) 00:12:24.002 2.299 - 2.311: 95.2447% ( 41) 00:12:24.002 2.311 - 2.323: 95.5932% ( 45) 00:12:24.002 2.323 - 2.335: 95.7249% ( 17) 00:12:24.002 2.335 - 2.347: 95.7404% ( 2) 00:12:24.002 2.347 - 2.359: 95.7946% ( 7) 00:12:24.002 2.359 - 2.370: 
95.8333% ( 5) 00:12:24.002 2.370 - 2.382: 95.8798% ( 6) 00:12:24.002 2.382 - 2.394: 95.9805% ( 13) 00:12:24.002 2.394 - 2.406: 96.0889% ( 14) 00:12:24.002 2.406 - 2.418: 96.2670% ( 23) 00:12:24.002 2.418 - 2.430: 96.4452% ( 23) 00:12:24.002 2.430 - 2.441: 96.7937% ( 45) 00:12:24.002 2.441 - 2.453: 97.0105% ( 28) 00:12:24.002 2.453 - 2.465: 97.2351% ( 29) 00:12:24.002 2.465 - 2.477: 97.5062% ( 35) 00:12:24.002 2.477 - 2.489: 97.6921% ( 24) 00:12:24.002 2.489 - 2.501: 97.8547% ( 21) 00:12:24.002 2.501 - 2.513: 98.0096% ( 20) 00:12:24.002 2.513 - 2.524: 98.1258% ( 15) 00:12:24.002 2.524 - 2.536: 98.2110% ( 11) 00:12:24.002 2.536 - 2.548: 98.2652% ( 7) 00:12:24.002 2.548 - 2.560: 98.2884% ( 3) 00:12:24.002 2.560 - 2.572: 98.3349% ( 6) 00:12:24.002 2.572 - 2.584: 98.3504% ( 2) 00:12:24.002 2.584 - 2.596: 98.3891% ( 5) 00:12:24.002 2.596 - 2.607: 98.4046% ( 2) 00:12:24.002 2.607 - 2.619: 98.4123% ( 1) 00:12:24.002 2.619 - 2.631: 98.4278% ( 2) 00:12:24.002 2.631 - 2.643: 98.4356% ( 1) 00:12:24.002 2.655 - 2.667: 98.4433% ( 1) 00:12:24.002 2.690 - 2.702: 98.4588% ( 2) 00:12:24.002 2.738 - 2.750: 98.4665% ( 1) 00:12:24.002 2.785 - 2.797: 98.4743% ( 1) 00:12:24.002 2.821 - 2.833: 98.4898% ( 2) 00:12:24.002 2.833 - 2.844: 98.4975% ( 1) 00:12:24.002 2.868 - 2.880: 98.5053% ( 1) 00:12:24.002 2.880 - 2.892: 98.5130% ( 1) 00:12:24.002 2.927 - 2.939: 98.5208% ( 1) 00:12:24.002 3.105 - 3.129: 98.5285% ( 1) 00:12:24.002 3.342 - 3.366: 98.5362% ( 1) 00:12:24.002 3.413 - 3.437: 98.5517% ( 2) 00:12:24.002 3.437 - 3.461: 98.5672% ( 2) 00:12:24.002 3.461 - 3.484: 98.5827% ( 2) 00:12:24.002 3.484 - 3.508: 98.5905% ( 1) 00:12:24.002 3.508 - 3.532: 98.6059% ( 2) 00:12:24.002 3.556 - 3.579: 98.6447% ( 5) 00:12:24.002 3.603 - 3.627: 98.6524% ( 1) 00:12:24.002 3.627 - 3.650: 98.6602% ( 1) 00:12:24.002 3.721 - 3.745: 98.6679% ( 1) 00:12:24.002 3.745 - 3.769: 98.6757% ( 1) 00:12:24.002 3.769 - 3.793: 98.6911% ( 2) 00:12:24.002 3.793 - 3.816: 98.7144% ( 3) 00:12:24.002 3.840 - 3.864: 98.7299% ( 
2) 00:12:24.002 3.864 - 3.887: 98.7454% ( 2) [2024-11-19 11:14:19.452178] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:24.259 3.887 - 3.911: 98.7531% ( 1) 00:12:24.259 3.935 - 3.959: 98.7608% ( 1) 00:12:24.259 3.959 - 3.982: 98.7686% ( 1) 00:12:24.259 5.784 - 5.807: 98.7763% ( 1) 00:12:24.259 6.495 - 6.542: 98.7841% ( 1) 00:12:24.259 6.590 - 6.637: 98.7918% ( 1) 00:12:24.259 6.684 - 6.732: 98.7996% ( 1) 00:12:24.259 7.159 - 7.206: 98.8073% ( 1) 00:12:24.259 7.253 - 7.301: 98.8151% ( 1) 00:12:24.259 7.348 - 7.396: 98.8228% ( 1) 00:12:24.259 7.538 - 7.585: 98.8305% ( 1) 00:12:24.259 7.680 - 7.727: 98.8383% ( 1) 00:12:24.259 8.201 - 8.249: 98.8538% ( 2) 00:12:24.259 8.439 - 8.486: 98.8615% ( 1) 00:12:24.259 8.960 - 9.007: 98.8693% ( 1) 00:12:24.259 15.739 - 15.834: 98.8848% ( 2) 00:12:24.259 15.834 - 15.929: 98.9002% ( 2) 00:12:24.259 15.929 - 16.024: 98.9235% ( 3) 00:12:24.259 16.024 - 16.119: 98.9777% ( 7) 00:12:24.259 16.119 - 16.213: 99.0087% ( 4) 00:12:24.259 16.213 - 16.308: 99.0397% ( 4) 00:12:24.259 16.308 - 16.403: 99.0939% ( 7) 00:12:24.259 16.403 - 16.498: 99.1094% ( 2) 00:12:24.259 16.498 - 16.593: 99.1171% ( 1) 00:12:24.259 16.593 - 16.687: 99.1403% ( 3) 00:12:24.259 16.687 - 16.782: 99.2023% ( 8) 00:12:24.259 16.782 - 16.877: 99.2255% ( 3) 00:12:24.259 16.972 - 17.067: 99.2410% ( 2) 00:12:24.259 17.067 - 17.161: 99.2565% ( 2) 00:12:24.259 17.161 - 17.256: 99.2643% ( 1) 00:12:24.259 17.256 - 17.351: 99.3030% ( 5) 00:12:24.259 17.825 - 17.920: 99.3107% ( 1) 00:12:24.259 17.920 - 18.015: 99.3185% ( 1) 00:12:24.259 24.652 - 24.841: 99.3262% ( 1) 00:12:24.259 29.961 - 30.151: 99.3340% ( 1) 00:12:24.259 33.754 - 33.944: 99.3417% ( 1) 00:12:24.259 3980.705 - 4004.978: 99.9071% ( 73) 00:12:24.259 4004.978 - 4029.250: 100.0000% ( 12) 00:12:24.259 00:12:24.259 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:24.259 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:24.259 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:24.259 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:24.259 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:24.517 [ 00:12:24.517 { 00:12:24.517 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:24.517 "subtype": "Discovery", 00:12:24.517 "listen_addresses": [], 00:12:24.517 "allow_any_host": true, 00:12:24.517 "hosts": [] 00:12:24.517 }, 00:12:24.517 { 00:12:24.517 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:24.517 "subtype": "NVMe", 00:12:24.517 "listen_addresses": [ 00:12:24.517 { 00:12:24.517 "trtype": "VFIOUSER", 00:12:24.517 "adrfam": "IPv4", 00:12:24.517 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:24.517 "trsvcid": "0" 00:12:24.517 } 00:12:24.517 ], 00:12:24.517 "allow_any_host": true, 00:12:24.517 "hosts": [], 00:12:24.517 "serial_number": "SPDK1", 00:12:24.517 "model_number": "SPDK bdev Controller", 00:12:24.517 "max_namespaces": 32, 00:12:24.517 "min_cntlid": 1, 00:12:24.517 "max_cntlid": 65519, 00:12:24.517 "namespaces": [ 00:12:24.517 { 00:12:24.517 "nsid": 1, 00:12:24.517 "bdev_name": "Malloc1", 00:12:24.517 "name": "Malloc1", 00:12:24.517 "nguid": "B944F2789C924E6C9F26AE8380286FEC", 00:12:24.517 "uuid": "b944f278-9c92-4e6c-9f26-ae8380286fec" 00:12:24.517 } 00:12:24.517 ] 00:12:24.517 }, 00:12:24.517 { 00:12:24.517 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:24.517 "subtype": "NVMe", 00:12:24.517 "listen_addresses": [ 00:12:24.517 { 00:12:24.517 "trtype": "VFIOUSER", 00:12:24.517 
"adrfam": "IPv4", 00:12:24.517 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:24.517 "trsvcid": "0" 00:12:24.517 } 00:12:24.517 ], 00:12:24.517 "allow_any_host": true, 00:12:24.517 "hosts": [], 00:12:24.517 "serial_number": "SPDK2", 00:12:24.517 "model_number": "SPDK bdev Controller", 00:12:24.517 "max_namespaces": 32, 00:12:24.517 "min_cntlid": 1, 00:12:24.517 "max_cntlid": 65519, 00:12:24.517 "namespaces": [ 00:12:24.517 { 00:12:24.517 "nsid": 1, 00:12:24.517 "bdev_name": "Malloc2", 00:12:24.517 "name": "Malloc2", 00:12:24.517 "nguid": "3EB17CF3DCCE4B9887AAE08C3AD2672C", 00:12:24.517 "uuid": "3eb17cf3-dcce-4b98-87aa-e08c3ad2672c" 00:12:24.517 } 00:12:24.517 ] 00:12:24.517 } 00:12:24.517 ] 00:12:24.517 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:24.517 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2580456 00:12:24.517 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:24.517 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:24.517 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:12:24.517 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:24.517 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:24.517 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:12:24.517 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:24.518 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:24.518 [2024-11-19 11:14:20.011365] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:24.776 Malloc3 00:12:24.776 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:25.033 [2024-11-19 11:14:20.446627] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:25.033 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:25.033 Asynchronous Event Request test 00:12:25.033 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:25.033 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:25.033 Registering asynchronous event callbacks... 00:12:25.033 Starting namespace attribute notice tests for all controllers... 00:12:25.033 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:25.033 aer_cb - Changed Namespace 00:12:25.033 Cleaning up... 
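The `waitforfile` helper the trace steps through (common/autotest_common.sh) simply polls until the aer tool touches `/tmp/aer_touch_file`, so the script blocks until AER callbacks are registered. A minimal sketch of that loop; the 0.1s poll interval and the retry cap are assumptions, not the tree's actual values:

```shell
# Minimal sketch of the waitforfile helper seen in the trace
# (common/autotest_common.sh). Poll interval and iteration cap
# are illustrative assumptions.
waitforfile() {
    local i=0
    while [ ! -e "$1" ]; do
        sleep 0.1
        i=$((i + 1))
        [ "$i" -lt 200 ] || return 1   # give up after ~20s
    done
    return 0
}
```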
00:12:25.291 [ 00:12:25.291 { 00:12:25.291 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:25.291 "subtype": "Discovery", 00:12:25.291 "listen_addresses": [], 00:12:25.291 "allow_any_host": true, 00:12:25.291 "hosts": [] 00:12:25.291 }, 00:12:25.291 { 00:12:25.291 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:25.291 "subtype": "NVMe", 00:12:25.291 "listen_addresses": [ 00:12:25.291 { 00:12:25.291 "trtype": "VFIOUSER", 00:12:25.291 "adrfam": "IPv4", 00:12:25.291 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:25.291 "trsvcid": "0" 00:12:25.291 } 00:12:25.291 ], 00:12:25.291 "allow_any_host": true, 00:12:25.291 "hosts": [], 00:12:25.291 "serial_number": "SPDK1", 00:12:25.291 "model_number": "SPDK bdev Controller", 00:12:25.291 "max_namespaces": 32, 00:12:25.291 "min_cntlid": 1, 00:12:25.291 "max_cntlid": 65519, 00:12:25.291 "namespaces": [ 00:12:25.291 { 00:12:25.291 "nsid": 1, 00:12:25.291 "bdev_name": "Malloc1", 00:12:25.291 "name": "Malloc1", 00:12:25.291 "nguid": "B944F2789C924E6C9F26AE8380286FEC", 00:12:25.291 "uuid": "b944f278-9c92-4e6c-9f26-ae8380286fec" 00:12:25.291 }, 00:12:25.291 { 00:12:25.291 "nsid": 2, 00:12:25.291 "bdev_name": "Malloc3", 00:12:25.291 "name": "Malloc3", 00:12:25.291 "nguid": "73F19399C9E94E47B2DA7942B4B82B92", 00:12:25.291 "uuid": "73f19399-c9e9-4e47-b2da-7942b4b82b92" 00:12:25.291 } 00:12:25.291 ] 00:12:25.291 }, 00:12:25.291 { 00:12:25.291 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:25.291 "subtype": "NVMe", 00:12:25.291 "listen_addresses": [ 00:12:25.291 { 00:12:25.291 "trtype": "VFIOUSER", 00:12:25.291 "adrfam": "IPv4", 00:12:25.291 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:25.291 "trsvcid": "0" 00:12:25.291 } 00:12:25.291 ], 00:12:25.291 "allow_any_host": true, 00:12:25.291 "hosts": [], 00:12:25.291 "serial_number": "SPDK2", 00:12:25.291 "model_number": "SPDK bdev Controller", 00:12:25.291 "max_namespaces": 32, 00:12:25.291 "min_cntlid": 1, 00:12:25.291 "max_cntlid": 65519, 00:12:25.291 "namespaces": [ 
00:12:25.291 { 00:12:25.291 "nsid": 1, 00:12:25.291 "bdev_name": "Malloc2", 00:12:25.291 "name": "Malloc2", 00:12:25.291 "nguid": "3EB17CF3DCCE4B9887AAE08C3AD2672C", 00:12:25.291 "uuid": "3eb17cf3-dcce-4b98-87aa-e08c3ad2672c" 00:12:25.291 } 00:12:25.291 ] 00:12:25.291 } 00:12:25.291 ] 00:12:25.291 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2580456 00:12:25.291 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:25.291 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:25.291 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:25.291 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:25.291 [2024-11-19 11:14:20.741970] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:12:25.291 [2024-11-19 11:14:20.742012] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2580593 ] 00:12:25.553 [2024-11-19 11:14:20.789934] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:25.553 [2024-11-19 11:14:20.798682] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:25.553 [2024-11-19 11:14:20.798716] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f86ef577000 00:12:25.553 [2024-11-19 11:14:20.799677] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.553 [2024-11-19 11:14:20.800689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.553 [2024-11-19 11:14:20.801685] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.553 [2024-11-19 11:14:20.802689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:25.553 [2024-11-19 11:14:20.803707] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:25.553 [2024-11-19 11:14:20.804702] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.553 [2024-11-19 11:14:20.805706] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:25.553 
[2024-11-19 11:14:20.806711] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.553 [2024-11-19 11:14:20.807721] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:25.553 [2024-11-19 11:14:20.807743] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f86ef56c000 00:12:25.553 [2024-11-19 11:14:20.808858] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:25.553 [2024-11-19 11:14:20.827568] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:25.553 [2024-11-19 11:14:20.827607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:12:25.553 [2024-11-19 11:14:20.829713] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:25.553 [2024-11-19 11:14:20.829771] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:25.553 [2024-11-19 11:14:20.829866] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:12:25.553 [2024-11-19 11:14:20.829890] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:12:25.553 [2024-11-19 11:14:20.829905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:12:25.553 [2024-11-19 11:14:20.830726] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:25.553 [2024-11-19 11:14:20.830748] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:12:25.553 [2024-11-19 11:14:20.830761] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:12:25.553 [2024-11-19 11:14:20.831741] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:25.553 [2024-11-19 11:14:20.831763] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:12:25.553 [2024-11-19 11:14:20.831777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:25.553 [2024-11-19 11:14:20.832743] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:25.553 [2024-11-19 11:14:20.832764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:25.553 [2024-11-19 11:14:20.833754] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:25.553 [2024-11-19 11:14:20.833774] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:25.553 [2024-11-19 11:14:20.833783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:25.553 [2024-11-19 11:14:20.833794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:25.553 [2024-11-19 11:14:20.833904] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:12:25.553 [2024-11-19 11:14:20.833912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:25.553 [2024-11-19 11:14:20.833920] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:25.553 [2024-11-19 11:14:20.834765] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:25.553 [2024-11-19 11:14:20.835767] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:25.553 [2024-11-19 11:14:20.836775] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:25.553 [2024-11-19 11:14:20.837768] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:25.553 [2024-11-19 11:14:20.837834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:25.553 [2024-11-19 11:14:20.838786] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:25.553 [2024-11-19 11:14:20.838808] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:25.553 [2024-11-19 11:14:20.838817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:25.553 [2024-11-19 11:14:20.838847] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:12:25.553 [2024-11-19 11:14:20.838862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:25.553 [2024-11-19 11:14:20.838887] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:25.553 [2024-11-19 11:14:20.838897] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.553 [2024-11-19 11:14:20.838904] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.553 [2024-11-19 11:14:20.838925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.553 [2024-11-19 11:14:20.845381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:25.553 [2024-11-19 11:14:20.845406] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:12:25.553 [2024-11-19 11:14:20.845416] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:12:25.553 [2024-11-19 11:14:20.845423] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:12:25.553 [2024-11-19 11:14:20.845431] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:25.554 [2024-11-19 11:14:20.845444] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:12:25.554 [2024-11-19 11:14:20.845453] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:12:25.554 [2024-11-19 11:14:20.845462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.845479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.845496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:25.554 [2024-11-19 11:14:20.853374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:25.554 [2024-11-19 11:14:20.853399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.554 [2024-11-19 11:14:20.853413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.554 [2024-11-19 11:14:20.853426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.554 [2024-11-19 11:14:20.853439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.554 [2024-11-19 11:14:20.853448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.853460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.853474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:25.554 [2024-11-19 11:14:20.861374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:25.554 [2024-11-19 11:14:20.861398] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:12:25.554 [2024-11-19 11:14:20.861413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.861426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.861436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.861450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:25.554 [2024-11-19 11:14:20.869374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:25.554 [2024-11-19 11:14:20.869451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.869469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:25.554 
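In the controller init trace above, reading register offset 0x8 returns 0x10300. That is the NVMe VS (version) register; per the NVMe base specification its layout is MJR in bits 31:16, MNR in bits 15:8, TER in bits 7:0, so this controller reports NVMe 1.3.0. The arithmetic, as a quick sketch:

```shell
# Decode the VS register value the trace reads at offset 0x8
# (field layout per the NVMe base specification).
vs=$((0x10300))
mjr=$(( (vs >> 16) & 0xffff ))
mnr=$(( (vs >> 8) & 0xff ))
ter=$(( vs & 0xff ))
echo "NVMe version: $mjr.$mnr.$ter"   # NVMe version: 1.3.0
```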
[2024-11-19 11:14:20.869483] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:25.554 [2024-11-19 11:14:20.869492] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:25.554 [2024-11-19 11:14:20.869498] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.554 [2024-11-19 11:14:20.869508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:25.554 [2024-11-19 11:14:20.877376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:25.554 [2024-11-19 11:14:20.877402] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:12:25.554 [2024-11-19 11:14:20.877427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.877444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.877457] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:25.554 [2024-11-19 11:14:20.877466] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.554 [2024-11-19 11:14:20.877472] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.554 [2024-11-19 11:14:20.877482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.554 [2024-11-19 11:14:20.885376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:25.554 [2024-11-19 11:14:20.885406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.885423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.885436] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:25.554 [2024-11-19 11:14:20.885445] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.554 [2024-11-19 11:14:20.885451] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.554 [2024-11-19 11:14:20.885460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.554 [2024-11-19 11:14:20.893386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:25.554 [2024-11-19 11:14:20.893410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.893423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.893438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.893449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.893458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.893466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.893475] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:25.554 [2024-11-19 11:14:20.893482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:12:25.554 [2024-11-19 11:14:20.893491] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:12:25.554 [2024-11-19 11:14:20.893518] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:25.554 [2024-11-19 11:14:20.901376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:25.554 [2024-11-19 11:14:20.901402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:25.554 [2024-11-19 11:14:20.909374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:25.554 [2024-11-19 11:14:20.909399] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:25.554 [2024-11-19 11:14:20.917375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:25.554 [2024-11-19 
11:14:20.917400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:25.554 [2024-11-19 11:14:20.925375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:25.554 [2024-11-19 11:14:20.925407] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:25.554 [2024-11-19 11:14:20.925418] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:25.554 [2024-11-19 11:14:20.925425] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:25.554 [2024-11-19 11:14:20.925431] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:25.554 [2024-11-19 11:14:20.925436] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:25.554 [2024-11-19 11:14:20.925446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:25.554 [2024-11-19 11:14:20.925458] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:25.554 [2024-11-19 11:14:20.925466] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:25.554 [2024-11-19 11:14:20.925476] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.554 [2024-11-19 11:14:20.925486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:25.554 [2024-11-19 11:14:20.925497] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:25.554 [2024-11-19 11:14:20.925505] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.554 [2024-11-19 11:14:20.925511] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.554 [2024-11-19 11:14:20.925520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.554 [2024-11-19 11:14:20.925532] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:25.554 [2024-11-19 11:14:20.925540] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:25.554 [2024-11-19 11:14:20.925546] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.555 [2024-11-19 11:14:20.925555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:25.555 [2024-11-19 11:14:20.933377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:25.555 [2024-11-19 11:14:20.933405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:25.555 [2024-11-19 11:14:20.933423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:25.555 [2024-11-19 11:14:20.933436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:25.555 ===================================================== 00:12:25.555 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:25.555 ===================================================== 00:12:25.555 Controller Capabilities/Features 00:12:25.555 
================================ 00:12:25.555 Vendor ID: 4e58 00:12:25.555 Subsystem Vendor ID: 4e58 00:12:25.555 Serial Number: SPDK2 00:12:25.555 Model Number: SPDK bdev Controller 00:12:25.555 Firmware Version: 25.01 00:12:25.555 Recommended Arb Burst: 6 00:12:25.555 IEEE OUI Identifier: 8d 6b 50 00:12:25.555 Multi-path I/O 00:12:25.555 May have multiple subsystem ports: Yes 00:12:25.555 May have multiple controllers: Yes 00:12:25.555 Associated with SR-IOV VF: No 00:12:25.555 Max Data Transfer Size: 131072 00:12:25.555 Max Number of Namespaces: 32 00:12:25.555 Max Number of I/O Queues: 127 00:12:25.555 NVMe Specification Version (VS): 1.3 00:12:25.555 NVMe Specification Version (Identify): 1.3 00:12:25.555 Maximum Queue Entries: 256 00:12:25.555 Contiguous Queues Required: Yes 00:12:25.555 Arbitration Mechanisms Supported 00:12:25.555 Weighted Round Robin: Not Supported 00:12:25.555 Vendor Specific: Not Supported 00:12:25.555 Reset Timeout: 15000 ms 00:12:25.555 Doorbell Stride: 4 bytes 00:12:25.555 NVM Subsystem Reset: Not Supported 00:12:25.555 Command Sets Supported 00:12:25.555 NVM Command Set: Supported 00:12:25.555 Boot Partition: Not Supported 00:12:25.555 Memory Page Size Minimum: 4096 bytes 00:12:25.555 Memory Page Size Maximum: 4096 bytes 00:12:25.555 Persistent Memory Region: Not Supported 00:12:25.555 Optional Asynchronous Events Supported 00:12:25.555 Namespace Attribute Notices: Supported 00:12:25.555 Firmware Activation Notices: Not Supported 00:12:25.555 ANA Change Notices: Not Supported 00:12:25.555 PLE Aggregate Log Change Notices: Not Supported 00:12:25.555 LBA Status Info Alert Notices: Not Supported 00:12:25.555 EGE Aggregate Log Change Notices: Not Supported 00:12:25.555 Normal NVM Subsystem Shutdown event: Not Supported 00:12:25.555 Zone Descriptor Change Notices: Not Supported 00:12:25.555 Discovery Log Change Notices: Not Supported 00:12:25.555 Controller Attributes 00:12:25.555 128-bit Host Identifier: Supported 00:12:25.555 
Non-Operational Permissive Mode: Not Supported 00:12:25.555 NVM Sets: Not Supported 00:12:25.555 Read Recovery Levels: Not Supported 00:12:25.555 Endurance Groups: Not Supported 00:12:25.555 Predictable Latency Mode: Not Supported 00:12:25.555 Traffic Based Keep ALive: Not Supported 00:12:25.555 Namespace Granularity: Not Supported 00:12:25.555 SQ Associations: Not Supported 00:12:25.555 UUID List: Not Supported 00:12:25.555 Multi-Domain Subsystem: Not Supported 00:12:25.555 Fixed Capacity Management: Not Supported 00:12:25.555 Variable Capacity Management: Not Supported 00:12:25.555 Delete Endurance Group: Not Supported 00:12:25.555 Delete NVM Set: Not Supported 00:12:25.555 Extended LBA Formats Supported: Not Supported 00:12:25.555 Flexible Data Placement Supported: Not Supported 00:12:25.555 00:12:25.555 Controller Memory Buffer Support 00:12:25.555 ================================ 00:12:25.555 Supported: No 00:12:25.555 00:12:25.555 Persistent Memory Region Support 00:12:25.555 ================================ 00:12:25.555 Supported: No 00:12:25.555 00:12:25.555 Admin Command Set Attributes 00:12:25.555 ============================ 00:12:25.555 Security Send/Receive: Not Supported 00:12:25.555 Format NVM: Not Supported 00:12:25.555 Firmware Activate/Download: Not Supported 00:12:25.555 Namespace Management: Not Supported 00:12:25.555 Device Self-Test: Not Supported 00:12:25.555 Directives: Not Supported 00:12:25.555 NVMe-MI: Not Supported 00:12:25.555 Virtualization Management: Not Supported 00:12:25.555 Doorbell Buffer Config: Not Supported 00:12:25.555 Get LBA Status Capability: Not Supported 00:12:25.555 Command & Feature Lockdown Capability: Not Supported 00:12:25.555 Abort Command Limit: 4 00:12:25.555 Async Event Request Limit: 4 00:12:25.555 Number of Firmware Slots: N/A 00:12:25.555 Firmware Slot 1 Read-Only: N/A 00:12:25.555 Firmware Activation Without Reset: N/A 00:12:25.555 Multiple Update Detection Support: N/A 00:12:25.555 Firmware Update 
Granularity: No Information Provided 00:12:25.555 Per-Namespace SMART Log: No 00:12:25.555 Asymmetric Namespace Access Log Page: Not Supported 00:12:25.555 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:25.555 Command Effects Log Page: Supported 00:12:25.555 Get Log Page Extended Data: Supported 00:12:25.555 Telemetry Log Pages: Not Supported 00:12:25.555 Persistent Event Log Pages: Not Supported 00:12:25.555 Supported Log Pages Log Page: May Support 00:12:25.555 Commands Supported & Effects Log Page: Not Supported 00:12:25.555 Feature Identifiers & Effects Log Page:May Support 00:12:25.555 NVMe-MI Commands & Effects Log Page: May Support 00:12:25.555 Data Area 4 for Telemetry Log: Not Supported 00:12:25.555 Error Log Page Entries Supported: 128 00:12:25.555 Keep Alive: Supported 00:12:25.555 Keep Alive Granularity: 10000 ms 00:12:25.555 00:12:25.555 NVM Command Set Attributes 00:12:25.555 ========================== 00:12:25.555 Submission Queue Entry Size 00:12:25.555 Max: 64 00:12:25.555 Min: 64 00:12:25.555 Completion Queue Entry Size 00:12:25.555 Max: 16 00:12:25.555 Min: 16 00:12:25.555 Number of Namespaces: 32 00:12:25.555 Compare Command: Supported 00:12:25.555 Write Uncorrectable Command: Not Supported 00:12:25.555 Dataset Management Command: Supported 00:12:25.555 Write Zeroes Command: Supported 00:12:25.555 Set Features Save Field: Not Supported 00:12:25.555 Reservations: Not Supported 00:12:25.555 Timestamp: Not Supported 00:12:25.555 Copy: Supported 00:12:25.555 Volatile Write Cache: Present 00:12:25.555 Atomic Write Unit (Normal): 1 00:12:25.555 Atomic Write Unit (PFail): 1 00:12:25.555 Atomic Compare & Write Unit: 1 00:12:25.555 Fused Compare & Write: Supported 00:12:25.555 Scatter-Gather List 00:12:25.555 SGL Command Set: Supported (Dword aligned) 00:12:25.555 SGL Keyed: Not Supported 00:12:25.555 SGL Bit Bucket Descriptor: Not Supported 00:12:25.555 SGL Metadata Pointer: Not Supported 00:12:25.555 Oversized SGL: Not Supported 00:12:25.555 SGL 
Metadata Address: Not Supported 00:12:25.555 SGL Offset: Not Supported 00:12:25.555 Transport SGL Data Block: Not Supported 00:12:25.555 Replay Protected Memory Block: Not Supported 00:12:25.555 00:12:25.555 Firmware Slot Information 00:12:25.555 ========================= 00:12:25.555 Active slot: 1 00:12:25.555 Slot 1 Firmware Revision: 25.01 00:12:25.555 00:12:25.555 00:12:25.555 Commands Supported and Effects 00:12:25.555 ============================== 00:12:25.555 Admin Commands 00:12:25.555 -------------- 00:12:25.555 Get Log Page (02h): Supported 00:12:25.555 Identify (06h): Supported 00:12:25.555 Abort (08h): Supported 00:12:25.555 Set Features (09h): Supported 00:12:25.555 Get Features (0Ah): Supported 00:12:25.555 Asynchronous Event Request (0Ch): Supported 00:12:25.555 Keep Alive (18h): Supported 00:12:25.555 I/O Commands 00:12:25.555 ------------ 00:12:25.555 Flush (00h): Supported LBA-Change 00:12:25.555 Write (01h): Supported LBA-Change 00:12:25.555 Read (02h): Supported 00:12:25.555 Compare (05h): Supported 00:12:25.555 Write Zeroes (08h): Supported LBA-Change 00:12:25.555 Dataset Management (09h): Supported LBA-Change 00:12:25.555 Copy (19h): Supported LBA-Change 00:12:25.555 00:12:25.555 Error Log 00:12:25.555 ========= 00:12:25.555 00:12:25.555 Arbitration 00:12:25.555 =========== 00:12:25.555 Arbitration Burst: 1 00:12:25.555 00:12:25.555 Power Management 00:12:25.555 ================ 00:12:25.555 Number of Power States: 1 00:12:25.555 Current Power State: Power State #0 00:12:25.555 Power State #0: 00:12:25.555 Max Power: 0.00 W 00:12:25.555 Non-Operational State: Operational 00:12:25.556 Entry Latency: Not Reported 00:12:25.556 Exit Latency: Not Reported 00:12:25.556 Relative Read Throughput: 0 00:12:25.556 Relative Read Latency: 0 00:12:25.556 Relative Write Throughput: 0 00:12:25.556 Relative Write Latency: 0 00:12:25.556 Idle Power: Not Reported 00:12:25.556 Active Power: Not Reported 00:12:25.556 Non-Operational Permissive Mode: Not 
Supported 00:12:25.556 00:12:25.556 Health Information 00:12:25.556 ================== 00:12:25.556 Critical Warnings: 00:12:25.556 Available Spare Space: OK 00:12:25.556 Temperature: OK 00:12:25.556 Device Reliability: OK 00:12:25.556 Read Only: No 00:12:25.556 Volatile Memory Backup: OK 00:12:25.556 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:25.556 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:25.556 Available Spare: 0% 00:12:25.556 Available Sp[2024-11-19 11:14:20.933565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:25.556 [2024-11-19 11:14:20.941376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:25.556 [2024-11-19 11:14:20.941430] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:12:25.556 [2024-11-19 11:14:20.941454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.556 [2024-11-19 11:14:20.941465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.556 [2024-11-19 11:14:20.941475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.556 [2024-11-19 11:14:20.941484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.556 [2024-11-19 11:14:20.941564] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:25.556 [2024-11-19 11:14:20.941587] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:25.556 
[2024-11-19 11:14:20.942568] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:25.556 [2024-11-19 11:14:20.942640] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:12:25.556 [2024-11-19 11:14:20.942655] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:12:25.556 [2024-11-19 11:14:20.943578] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:25.556 [2024-11-19 11:14:20.943608] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:12:25.556 [2024-11-19 11:14:20.943676] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:25.556 [2024-11-19 11:14:20.944854] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:25.556 are Threshold: 0% 00:12:25.556 Life Percentage Used: 0% 00:12:25.556 Data Units Read: 0 00:12:25.556 Data Units Written: 0 00:12:25.556 Host Read Commands: 0 00:12:25.556 Host Write Commands: 0 00:12:25.556 Controller Busy Time: 0 minutes 00:12:25.556 Power Cycles: 0 00:12:25.556 Power On Hours: 0 hours 00:12:25.556 Unsafe Shutdowns: 0 00:12:25.556 Unrecoverable Media Errors: 0 00:12:25.556 Lifetime Error Log Entries: 0 00:12:25.556 Warning Temperature Time: 0 minutes 00:12:25.556 Critical Temperature Time: 0 minutes 00:12:25.556 00:12:25.556 Number of Queues 00:12:25.556 ================ 00:12:25.556 Number of I/O Submission Queues: 127 00:12:25.556 Number of I/O Completion Queues: 127 00:12:25.556 00:12:25.556 Active Namespaces 00:12:25.556 ================= 00:12:25.556 Namespace ID:1 00:12:25.556 Error Recovery Timeout: Unlimited 
00:12:25.556 Command Set Identifier: NVM (00h) 00:12:25.556 Deallocate: Supported 00:12:25.556 Deallocated/Unwritten Error: Not Supported 00:12:25.556 Deallocated Read Value: Unknown 00:12:25.556 Deallocate in Write Zeroes: Not Supported 00:12:25.556 Deallocated Guard Field: 0xFFFF 00:12:25.556 Flush: Supported 00:12:25.556 Reservation: Supported 00:12:25.556 Namespace Sharing Capabilities: Multiple Controllers 00:12:25.556 Size (in LBAs): 131072 (0GiB) 00:12:25.556 Capacity (in LBAs): 131072 (0GiB) 00:12:25.556 Utilization (in LBAs): 131072 (0GiB) 00:12:25.556 NGUID: 3EB17CF3DCCE4B9887AAE08C3AD2672C 00:12:25.556 UUID: 3eb17cf3-dcce-4b98-87aa-e08c3ad2672c 00:12:25.556 Thin Provisioning: Not Supported 00:12:25.556 Per-NS Atomic Units: Yes 00:12:25.556 Atomic Boundary Size (Normal): 0 00:12:25.556 Atomic Boundary Size (PFail): 0 00:12:25.556 Atomic Boundary Offset: 0 00:12:25.556 Maximum Single Source Range Length: 65535 00:12:25.556 Maximum Copy Length: 65535 00:12:25.556 Maximum Source Range Count: 1 00:12:25.556 NGUID/EUI64 Never Reused: No 00:12:25.556 Namespace Write Protected: No 00:12:25.556 Number of LBA Formats: 1 00:12:25.556 Current LBA Format: LBA Format #00 00:12:25.556 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:25.556 00:12:25.556 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:25.870 [2024-11-19 11:14:21.197198] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:31.162 Initializing NVMe Controllers 00:12:31.162 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:31.162 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:12:31.162 Initialization complete. Launching workers. 00:12:31.162 ======================================================== 00:12:31.162 Latency(us) 00:12:31.162 Device Information : IOPS MiB/s Average min max 00:12:31.162 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34124.10 133.30 3750.76 1190.94 11589.17 00:12:31.162 ======================================================== 00:12:31.162 Total : 34124.10 133.30 3750.76 1190.94 11589.17 00:12:31.162 00:12:31.162 [2024-11-19 11:14:26.308779] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:31.162 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:31.162 [2024-11-19 11:14:26.571532] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:36.427 Initializing NVMe Controllers 00:12:36.427 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:36.427 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:36.427 Initialization complete. Launching workers. 
00:12:36.427 ======================================================== 00:12:36.427 Latency(us) 00:12:36.427 Device Information : IOPS MiB/s Average min max 00:12:36.427 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31714.40 123.88 4037.25 1229.85 10523.59 00:12:36.427 ======================================================== 00:12:36.427 Total : 31714.40 123.88 4037.25 1229.85 10523.59 00:12:36.427 00:12:36.427 [2024-11-19 11:14:31.593967] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:36.427 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:36.427 [2024-11-19 11:14:31.832218] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:41.693 [2024-11-19 11:14:36.980522] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:41.693 Initializing NVMe Controllers 00:12:41.693 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:41.693 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:41.693 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:41.693 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:41.693 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:41.693 Initialization complete. Launching workers. 
00:12:41.693 Starting thread on core 2 00:12:41.693 Starting thread on core 3 00:12:41.693 Starting thread on core 1 00:12:41.693 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:41.951 [2024-11-19 11:14:37.321892] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:45.234 [2024-11-19 11:14:40.384868] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:45.234 Initializing NVMe Controllers 00:12:45.234 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.234 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.234 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:45.234 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:45.234 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:45.234 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:45.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:45.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:45.234 Initialization complete. Launching workers. 
00:12:45.234 Starting thread on core 1 with urgent priority queue 00:12:45.234 Starting thread on core 2 with urgent priority queue 00:12:45.234 Starting thread on core 3 with urgent priority queue 00:12:45.234 Starting thread on core 0 with urgent priority queue 00:12:45.234 SPDK bdev Controller (SPDK2 ) core 0: 5371.00 IO/s 18.62 secs/100000 ios 00:12:45.234 SPDK bdev Controller (SPDK2 ) core 1: 5415.67 IO/s 18.46 secs/100000 ios 00:12:45.234 SPDK bdev Controller (SPDK2 ) core 2: 5270.33 IO/s 18.97 secs/100000 ios 00:12:45.234 SPDK bdev Controller (SPDK2 ) core 3: 5124.00 IO/s 19.52 secs/100000 ios 00:12:45.234 ======================================================== 00:12:45.234 00:12:45.234 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:45.234 [2024-11-19 11:14:40.711949] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:45.234 Initializing NVMe Controllers 00:12:45.234 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.234 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.234 Namespace ID: 1 size: 0GB 00:12:45.234 Initialization complete. 00:12:45.234 INFO: using host memory buffer for IO 00:12:45.234 Hello world! 
00:12:45.234 [2024-11-19 11:14:40.721180] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:45.492 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:45.749 [2024-11-19 11:14:41.048146] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:46.683 Initializing NVMe Controllers 00:12:46.683 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:46.683 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:46.683 Initialization complete. Launching workers. 00:12:46.683 submit (in ns) avg, min, max = 8558.7, 3530.0, 4016457.8 00:12:46.683 complete (in ns) avg, min, max = 22060.7, 2062.2, 4016413.3 00:12:46.683 00:12:46.683 Submit histogram 00:12:46.683 ================ 00:12:46.683 Range in us Cumulative Count 00:12:46.683 3.508 - 3.532: 0.0233% ( 3) 00:12:46.683 3.532 - 3.556: 1.6278% ( 207) 00:12:46.683 3.556 - 3.579: 3.1858% ( 201) 00:12:46.683 3.579 - 3.603: 8.1622% ( 642) 00:12:46.683 3.603 - 3.627: 15.4252% ( 937) 00:12:46.684 3.627 - 3.650: 25.3469% ( 1280) 00:12:46.684 3.650 - 3.674: 33.2377% ( 1018) 00:12:46.684 3.674 - 3.698: 40.5395% ( 942) 00:12:46.684 3.698 - 3.721: 47.5157% ( 900) 00:12:46.684 3.721 - 3.745: 53.7943% ( 810) 00:12:46.684 3.745 - 3.769: 59.4062% ( 724) 00:12:46.684 3.769 - 3.793: 63.9950% ( 592) 00:12:46.684 3.793 - 3.816: 67.9250% ( 507) 00:12:46.684 3.816 - 3.840: 71.2038% ( 423) 00:12:46.684 3.840 - 3.864: 74.8702% ( 473) 00:12:46.684 3.864 - 3.887: 78.8854% ( 518) 00:12:46.684 3.887 - 3.911: 82.4122% ( 455) 00:12:46.684 3.911 - 3.935: 85.1949% ( 359) 00:12:46.684 3.935 - 3.959: 87.2878% ( 270) 00:12:46.684 3.959 - 3.982: 88.9156% ( 210) 00:12:46.684 3.982 - 4.006: 90.6441% ( 223) 
00:12:46.684 4.006 - 4.030: 91.9929% ( 174) 00:12:46.684 4.030 - 4.053: 93.2176% ( 158) 00:12:46.684 4.053 - 4.077: 93.9850% ( 99) 00:12:46.684 4.077 - 4.101: 94.7523% ( 99) 00:12:46.684 4.101 - 4.124: 95.2407% ( 63) 00:12:46.684 4.124 - 4.148: 95.8530% ( 79) 00:12:46.684 4.148 - 4.172: 96.2639% ( 53) 00:12:46.684 4.172 - 4.196: 96.4499% ( 24) 00:12:46.684 4.196 - 4.219: 96.6979% ( 32) 00:12:46.684 4.219 - 4.243: 96.8917% ( 25) 00:12:46.684 4.243 - 4.267: 97.0080% ( 15) 00:12:46.684 4.267 - 4.290: 97.1475% ( 18) 00:12:46.684 4.290 - 4.314: 97.2793% ( 17) 00:12:46.684 4.314 - 4.338: 97.3568% ( 10) 00:12:46.684 4.338 - 4.361: 97.4421% ( 11) 00:12:46.684 4.361 - 4.385: 97.5196% ( 10) 00:12:46.684 4.385 - 4.409: 97.5738% ( 7) 00:12:46.684 4.409 - 4.433: 97.5816% ( 1) 00:12:46.684 4.433 - 4.456: 97.6048% ( 3) 00:12:46.684 4.456 - 4.480: 97.6358% ( 4) 00:12:46.684 4.480 - 4.504: 97.6513% ( 2) 00:12:46.684 4.575 - 4.599: 97.6591% ( 1) 00:12:46.684 4.599 - 4.622: 97.6901% ( 4) 00:12:46.684 4.622 - 4.646: 97.7134% ( 3) 00:12:46.684 4.646 - 4.670: 97.7289% ( 2) 00:12:46.684 4.670 - 4.693: 97.7366% ( 1) 00:12:46.684 4.741 - 4.764: 97.7444% ( 1) 00:12:46.684 4.764 - 4.788: 97.7599% ( 2) 00:12:46.684 4.788 - 4.812: 97.7986% ( 5) 00:12:46.684 4.812 - 4.836: 97.8374% ( 5) 00:12:46.684 4.836 - 4.859: 97.8684% ( 4) 00:12:46.684 4.859 - 4.883: 97.8994% ( 4) 00:12:46.684 4.883 - 4.907: 97.9536% ( 7) 00:12:46.684 4.907 - 4.930: 97.9847% ( 4) 00:12:46.684 4.930 - 4.954: 98.0312% ( 6) 00:12:46.684 4.954 - 4.978: 98.0854% ( 7) 00:12:46.684 4.978 - 5.001: 98.1242% ( 5) 00:12:46.684 5.001 - 5.025: 98.1939% ( 9) 00:12:46.684 5.025 - 5.049: 98.2637% ( 9) 00:12:46.684 5.049 - 5.073: 98.3180% ( 7) 00:12:46.684 5.073 - 5.096: 98.3490% ( 4) 00:12:46.684 5.096 - 5.120: 98.3877% ( 5) 00:12:46.684 5.120 - 5.144: 98.4187% ( 4) 00:12:46.684 5.144 - 5.167: 98.4575% ( 5) 00:12:46.684 5.167 - 5.191: 98.4730% ( 2) 00:12:46.684 5.191 - 5.215: 98.4807% ( 1) 00:12:46.684 5.215 - 5.239: 98.4962% ( 2) 
00:12:46.684 5.239 - 5.262: 98.5195% ( 3) 00:12:46.684 5.262 - 5.286: 98.5272% ( 1) 00:12:46.684 5.286 - 5.310: 98.5427% ( 2) 00:12:46.684 5.333 - 5.357: 98.5583% ( 2) 00:12:46.684 5.381 - 5.404: 98.5660% ( 1) 00:12:46.684 5.428 - 5.452: 98.5738% ( 1) 00:12:46.684 5.570 - 5.594: 98.5815% ( 1) 00:12:46.684 5.594 - 5.618: 98.5893% ( 1) 00:12:46.684 5.641 - 5.665: 98.6048% ( 2) 00:12:46.684 5.760 - 5.784: 98.6125% ( 1) 00:12:46.684 5.831 - 5.855: 98.6203% ( 1) 00:12:46.684 5.997 - 6.021: 98.6280% ( 1) 00:12:46.684 6.044 - 6.068: 98.6358% ( 1) 00:12:46.684 6.210 - 6.258: 98.6435% ( 1) 00:12:46.684 6.258 - 6.305: 98.6513% ( 1) 00:12:46.684 6.590 - 6.637: 98.6590% ( 1) 00:12:46.684 6.827 - 6.874: 98.6668% ( 1) 00:12:46.684 6.969 - 7.016: 98.6745% ( 1) 00:12:46.684 7.253 - 7.301: 98.6823% ( 1) 00:12:46.684 7.301 - 7.348: 98.6978% ( 2) 00:12:46.684 7.348 - 7.396: 98.7055% ( 1) 00:12:46.684 7.585 - 7.633: 98.7133% ( 1) 00:12:46.684 7.727 - 7.775: 98.7210% ( 1) 00:12:46.684 7.775 - 7.822: 98.7288% ( 1) 00:12:46.684 7.822 - 7.870: 98.7365% ( 1) 00:12:46.684 8.059 - 8.107: 98.7443% ( 1) 00:12:46.684 8.201 - 8.249: 98.7520% ( 1) 00:12:46.684 8.296 - 8.344: 98.7598% ( 1) 00:12:46.684 8.344 - 8.391: 98.7675% ( 1) 00:12:46.684 8.391 - 8.439: 98.7753% ( 1) 00:12:46.684 8.486 - 8.533: 98.7830% ( 1) 00:12:46.684 8.533 - 8.581: 98.7985% ( 2) 00:12:46.684 8.581 - 8.628: 98.8063% ( 1) 00:12:46.684 8.628 - 8.676: 98.8140% ( 1) 00:12:46.684 8.676 - 8.723: 98.8373% ( 3) 00:12:46.684 8.770 - 8.818: 98.8528% ( 2) 00:12:46.684 8.960 - 9.007: 98.8606% ( 1) 00:12:46.684 9.197 - 9.244: 98.8761% ( 2) 00:12:46.684 9.244 - 9.292: 98.8916% ( 2) 00:12:46.684 9.292 - 9.339: 98.8993% ( 1) 00:12:46.684 9.387 - 9.434: 98.9071% ( 1) 00:12:46.684 9.434 - 9.481: 98.9148% ( 1) 00:12:46.684 9.481 - 9.529: 98.9226% ( 1) 00:12:46.684 9.529 - 9.576: 98.9381% ( 2) 00:12:46.684 9.576 - 9.624: 98.9458% ( 1) 00:12:46.684 9.624 - 9.671: 98.9536% ( 1) 00:12:46.684 9.719 - 9.766: 98.9613% ( 1) 00:12:46.684 9.813 - 
9.861: 98.9691% ( 1) 00:12:46.684 10.050 - 10.098: 98.9846% ( 2) 00:12:46.684 10.193 - 10.240: 99.0078% ( 3) 00:12:46.684 10.240 - 10.287: 99.0233% ( 2) 00:12:46.684 10.335 - 10.382: 99.0388% ( 2) 00:12:46.684 10.524 - 10.572: 99.0466% ( 1) 00:12:46.684 10.572 - 10.619: 99.0543% ( 1) 00:12:46.684 10.619 - 10.667: 99.0621% ( 1) 00:12:46.684 10.809 - 10.856: 99.0698% ( 1) 00:12:46.684 10.951 - 10.999: 99.0776% ( 1) 00:12:46.684 10.999 - 11.046: 99.0931% ( 2) 00:12:46.684 11.141 - 11.188: 99.1008% ( 1) 00:12:46.684 11.236 - 11.283: 99.1086% ( 1) 00:12:46.684 11.473 - 11.520: 99.1163% ( 1) 00:12:46.684 12.136 - 12.231: 99.1241% ( 1) 00:12:46.684 12.326 - 12.421: 99.1319% ( 1) 00:12:46.684 12.421 - 12.516: 99.1396% ( 1) 00:12:46.684 13.084 - 13.179: 99.1629% ( 3) 00:12:46.684 13.938 - 14.033: 99.1706% ( 1) 00:12:46.684 14.033 - 14.127: 99.1784% ( 1) 00:12:46.684 14.412 - 14.507: 99.1861% ( 1) 00:12:46.684 14.601 - 14.696: 99.2016% ( 2) 00:12:46.684 14.696 - 14.791: 99.2094% ( 1) 00:12:46.684 15.076 - 15.170: 99.2171% ( 1) 00:12:46.684 17.067 - 17.161: 99.2249% ( 1) 00:12:46.684 17.161 - 17.256: 99.2404% ( 2) 00:12:46.684 17.256 - 17.351: 99.2636% ( 3) 00:12:46.684 17.351 - 17.446: 99.2869% ( 3) 00:12:46.684 17.446 - 17.541: 99.3179% ( 4) 00:12:46.684 17.541 - 17.636: 99.3489% ( 4) 00:12:46.684 17.636 - 17.730: 99.3876% ( 5) 00:12:46.684 17.730 - 17.825: 99.4342% ( 6) 00:12:46.684 17.825 - 17.920: 99.4652% ( 4) 00:12:46.684 17.920 - 18.015: 99.5349% ( 9) 00:12:46.684 18.015 - 18.110: 99.5814% ( 6) 00:12:46.684 18.110 - 18.204: 99.6124% ( 4) 00:12:46.684 18.204 - 18.299: 99.6279% ( 2) 00:12:46.684 18.299 - 18.394: 99.6899% ( 8) 00:12:46.684 18.394 - 18.489: 99.7597% ( 9) 00:12:46.684 18.489 - 18.584: 99.7752% ( 2) 00:12:46.684 18.584 - 18.679: 99.7907% ( 2) 00:12:46.684 18.679 - 18.773: 99.7985% ( 1) 00:12:46.684 18.868 - 18.963: 99.8140% ( 2) 00:12:46.684 19.153 - 19.247: 99.8295% ( 2) 00:12:46.684 21.713 - 21.807: 99.8450% ( 2) 00:12:46.684 22.945 - 23.040: 99.8527% ( 
1) 00:12:46.684 23.609 - 23.704: 99.8605% ( 1) 00:12:46.684 23.704 - 23.799: 99.8682% ( 1) 00:12:46.684 24.083 - 24.178: 99.8760% ( 1) 00:12:46.684 24.462 - 24.652: 99.8837% ( 1) 00:12:46.684 3980.705 - 4004.978: 99.9922% ( 14) 00:12:46.684 4004.978 - 4029.250: 100.0000% ( 1) 00:12:46.684 00:12:46.684 Complete histogram 00:12:46.684 ================== 00:12:46.684 Range in us Cumulative Count 00:12:46.684 2.062 - 2.074: 0.1938% ( 25) 00:12:46.684 2.074 - 2.086: 16.6576% ( 2124) 00:12:46.684 2.086 - 2.098: 48.1281% ( 4060) 00:12:46.684 2.098 - 2.110: 50.6085% ( 320) 00:12:46.684 2.110 - 2.121: 56.2282% ( 725) 00:12:46.684 2.121 - 2.133: 59.6233% ( 438) 00:12:46.684 2.133 - 2.145: 61.3363% ( 221) 00:12:46.684 2.145 - 2.157: 73.1261% ( 1521) 00:12:46.684 2.157 - 2.169: 82.0246% ( 1148) 00:12:46.684 2.169 - 2.181: 83.2029% ( 152) 00:12:46.684 2.181 - 2.193: 85.2647% ( 266) 00:12:46.685 2.193 - 2.204: 86.7530% ( 192) 00:12:46.685 2.204 - 2.216: 88.3342% ( 204) 00:12:46.685 2.216 - 2.228: 91.6673% ( 430) 00:12:46.685 2.228 - 2.240: 93.5121% ( 238) 00:12:46.685 2.240 - 2.252: 93.7679% ( 33) 00:12:46.685 2.252 - 2.264: 94.2098% ( 57) 00:12:46.685 2.264 - 2.276: 94.5353% ( 42) 00:12:46.685 2.276 - 2.287: 94.9849% ( 58) 00:12:46.685 2.287 - 2.299: 95.4345% ( 58) 00:12:46.685 2.299 - 2.311: 95.5662% ( 17) 00:12:46.685 2.311 - 2.323: 95.6360% ( 9) 00:12:46.685 2.323 - 2.335: 95.7213% ( 11) 00:12:46.685 2.335 - 2.347: 95.8220% ( 13) 00:12:46.685 2.347 - 2.359: 95.8685% ( 6) 00:12:46.685 2.359 - 2.370: 96.0313% ( 21) 00:12:46.685 2.370 - 2.382: 96.1166% ( 11) 00:12:46.685 2.382 - 2.394: 96.2716% ( 20) 00:12:46.685 2.394 - 2.406: 96.4499% ( 23) 00:12:46.685 2.406 - 2.418: 96.5662% ( 15) 00:12:46.685 2.418 - 2.430: 96.8064% ( 31) 00:12:46.685 2.430 - 2.441: 97.0157% ( 27) 00:12:46.685 2.441 - 2.453: 97.2095% ( 25) 00:12:46.685 2.453 - 2.465: 97.3800% ( 22) 00:12:46.685 2.465 - 2.477: 97.5506% ( 22) 00:12:46.685 2.477 - 2.489: 97.7521% ( 26) 00:12:46.685 2.489 - 2.501: 97.8916% ( 
18) 00:12:46.685 2.501 - 2.513: 98.0079% ( 15) 00:12:46.685 2.513 - 2.524: 98.1087% ( 13) 00:12:46.685 2.524 - 2.536: 98.2404% ( 17) 00:12:46.685 2.536 - 2.548: 98.2947% ( 7) 00:12:46.685 2.548 - 2.560: 98.3490% ( 7) 00:12:46.685 2.560 - 2.572: 98.3955% ( 6) 00:12:46.685 2.572 - 2.584: 98.4187% ( 3) 00:12:46.685 2.584 - 2.596: 98.4420% ( 3) 00:12:46.685 2.596 - 2.607: 98.4575% ( 2) 00:12:46.685 2.607 - 2.619: 98.4652% ( 1) 00:12:46.685 2.631 - 2.643: 98.4885% ( 3) 00:12:46.685 2.643 - 2.655: 98.4962% ( 1) 00:12:46.685 2.655 - 2.667: 98.5117% ( 2) 00:12:46.685 2.679 - 2.690: 98.5272% ( 2) 00:12:46.685 2.738 - 2.750: 98.5350% ( 1) 00:12:46.685 2.785 - 2.797: 98.5505% ( 2) 00:12:46.685 2.797 - 2.809: 98.5738% ( 3) 00:12:46.685 2.844 - 2.856: 98.5815% ( 1) 00:12:46.685 2.868 - 2.880: 98.5893% ( 1) 00:12:46.685 2.939 - 2.951: 98.5970% ( 1) 00:12:46.685 2.963 - 2.975: 98.6125% ( 2) 00:12:46.685 3.413 - 3.437: 98.6203% ( 1) 00:12:46.685 3.698 - 3.721: 98.6435% ( 3) 00:12:46.685 3.721 - 3.745: 98.6668% ( 3) 00:12:46.685 3.745 - 3.769: 98.6745% ( 1) 00:12:46.685 3.769 - 3.793: 98.6900% ( 2) 00:12:46.685 3.793 - 3.816: 98.7210% ( 4) 00:12:46.685 3.816 - 3.840: 98.7365% ( 2) 00:12:46.685 3.840 - 3.864: 98.7520% ( 2) 00:12:46.685 3.864 - 3.887: 98.7598% ( 1) 00:12:46.685 3.887 - 3.911: 98.7753% ( 2) 00:12:46.685 3.935 - 3.959: 98.7830% ( 1) 00:12:46.685 3.982 - 4.006: 98.7908% ( 1) 00:12:46.685 4.006 - 4.030: 98.7985% ( 1) 00:12:46.685 4.030 - 4.053: 98.8140% ( 2) 00:12:46.685 4.053 - 4.077: 98.8218% ( 1) 00:12:46.685 4.101 - 4.124: 98.8451% ( 3) 00:12:46.685 4.124 - 4.148: 98.8528% ( 1) 00:12:46.685 4.148 - 4.172: 98.8606% ( 1) 00:12:46.685 4.172 - 4.196: 98.8683% ( 1) 00:12:46.685 4.196 - 4.219: 98.8761% ( 1) 00:12:46.685 4.267 - 4.290: 98.8838% ( 1) 00:12:46.685 4.290 - 4.314: 98.8916% ( 1) 00:12:46.685 4.338 - 4.361: 98.8993% ( 1) 00:12:46.685 5.973 - 5.997: 98.9071% ( 1) 00:12:46.685 6.400 - 6.447: 98.9148% ( 1) 00:12:46.685 6.447 - 6.495: 98.9226% ( 1) 00:12:46.685 6.637 
- 6.684: 98.9303% ( 1) [2024-11-19 11:14:42.144495] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:46.943 6.779 - 6.827: 98.9458% ( 2) 00:12:46.943 7.111 - 7.159: 98.9536% ( 1) 00:12:46.943 7.159 - 7.206: 98.9613% ( 1) 00:12:46.943 7.253 - 7.301: 98.9691% ( 1) 00:12:46.943 7.633 - 7.680: 98.9768% ( 1) 00:12:46.943 7.727 - 7.775: 98.9846% ( 1) 00:12:46.943 7.964 - 8.012: 98.9923% ( 1) 00:12:46.943 8.439 - 8.486: 99.0001% ( 1) 00:12:46.943 8.486 - 8.533: 99.0156% ( 2) 00:12:46.943 10.240 - 10.287: 99.0233% ( 1) 00:12:46.943 10.477 - 10.524: 99.0311% ( 1) 00:12:46.943 12.800 - 12.895: 99.0388% ( 1) 00:12:46.943 15.644 - 15.739: 99.0543% ( 2) 00:12:46.943 15.739 - 15.834: 99.0621% ( 1) 00:12:46.943 15.834 - 15.929: 99.0698% ( 1) 00:12:46.943 15.929 - 16.024: 99.1086% ( 5) 00:12:46.943 16.024 - 16.119: 99.1396% ( 4) 00:12:46.943 16.119 - 16.213: 99.1706% ( 4) 00:12:46.943 16.213 - 16.308: 99.1939% ( 3) 00:12:46.943 16.308 - 16.403: 99.2171% ( 3) 00:12:46.943 16.403 - 16.498: 99.2481% ( 4) 00:12:46.943 16.498 - 16.593: 99.2636% ( 2) 00:12:46.943 16.593 - 16.687: 99.2869% ( 3) 00:12:46.943 16.687 - 16.782: 99.3179% ( 4) 00:12:46.943 16.782 - 16.877: 99.3411% ( 3) 00:12:46.943 16.877 - 16.972: 99.3644% ( 3) 00:12:46.943 16.972 - 17.067: 99.3721% ( 1) 00:12:46.943 17.067 - 17.161: 99.3876% ( 2) 00:12:46.943 17.161 - 17.256: 99.3954% ( 1) 00:12:46.943 17.256 - 17.351: 99.4186% ( 3) 00:12:46.943 17.351 - 17.446: 99.4342% ( 2) 00:12:46.943 17.541 - 17.636: 99.4419% ( 1) 00:12:46.943 17.636 - 17.730: 99.4497% ( 1) 00:12:46.943 18.015 - 18.110: 99.4729% ( 3) 00:12:46.943 18.299 - 18.394: 99.4807% ( 1) 00:12:46.943 18.394 - 18.489: 99.4962% ( 2) 00:12:46.943 29.393 - 29.582: 99.5039% ( 1) 00:12:46.943 3980.705 - 4004.978: 99.9535% ( 58) 00:12:46.943 4004.978 - 4029.250: 100.0000% ( 6) 00:12:46.943 00:12:46.943 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # 
aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:46.943 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:46.943 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:46.943 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:46.943 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:47.201 [ 00:12:47.201 { 00:12:47.201 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:47.201 "subtype": "Discovery", 00:12:47.201 "listen_addresses": [], 00:12:47.201 "allow_any_host": true, 00:12:47.201 "hosts": [] 00:12:47.201 }, 00:12:47.201 { 00:12:47.201 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:47.201 "subtype": "NVMe", 00:12:47.201 "listen_addresses": [ 00:12:47.201 { 00:12:47.201 "trtype": "VFIOUSER", 00:12:47.201 "adrfam": "IPv4", 00:12:47.201 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:47.201 "trsvcid": "0" 00:12:47.201 } 00:12:47.201 ], 00:12:47.201 "allow_any_host": true, 00:12:47.201 "hosts": [], 00:12:47.201 "serial_number": "SPDK1", 00:12:47.201 "model_number": "SPDK bdev Controller", 00:12:47.201 "max_namespaces": 32, 00:12:47.201 "min_cntlid": 1, 00:12:47.201 "max_cntlid": 65519, 00:12:47.201 "namespaces": [ 00:12:47.201 { 00:12:47.201 "nsid": 1, 00:12:47.201 "bdev_name": "Malloc1", 00:12:47.201 "name": "Malloc1", 00:12:47.201 "nguid": "B944F2789C924E6C9F26AE8380286FEC", 00:12:47.201 "uuid": "b944f278-9c92-4e6c-9f26-ae8380286fec" 00:12:47.201 }, 00:12:47.201 { 00:12:47.201 "nsid": 2, 00:12:47.201 "bdev_name": "Malloc3", 00:12:47.201 "name": "Malloc3", 00:12:47.201 "nguid": "73F19399C9E94E47B2DA7942B4B82B92", 00:12:47.201 "uuid": 
"73f19399-c9e9-4e47-b2da-7942b4b82b92" 00:12:47.201 } 00:12:47.201 ] 00:12:47.201 }, 00:12:47.201 { 00:12:47.201 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:47.201 "subtype": "NVMe", 00:12:47.201 "listen_addresses": [ 00:12:47.201 { 00:12:47.201 "trtype": "VFIOUSER", 00:12:47.201 "adrfam": "IPv4", 00:12:47.201 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:47.201 "trsvcid": "0" 00:12:47.201 } 00:12:47.201 ], 00:12:47.201 "allow_any_host": true, 00:12:47.201 "hosts": [], 00:12:47.201 "serial_number": "SPDK2", 00:12:47.201 "model_number": "SPDK bdev Controller", 00:12:47.201 "max_namespaces": 32, 00:12:47.201 "min_cntlid": 1, 00:12:47.201 "max_cntlid": 65519, 00:12:47.201 "namespaces": [ 00:12:47.201 { 00:12:47.201 "nsid": 1, 00:12:47.201 "bdev_name": "Malloc2", 00:12:47.201 "name": "Malloc2", 00:12:47.201 "nguid": "3EB17CF3DCCE4B9887AAE08C3AD2672C", 00:12:47.201 "uuid": "3eb17cf3-dcce-4b98-87aa-e08c3ad2672c" 00:12:47.201 } 00:12:47.201 ] 00:12:47.201 } 00:12:47.201 ] 00:12:47.201 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:47.201 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2583120 00:12:47.201 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:47.201 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:47.201 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:12:47.201 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:47.201 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:47.201 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:12:47.201 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:47.201 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:47.459 [2024-11-19 11:14:42.709852] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:47.459 Malloc4 00:12:47.459 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:47.716 [2024-11-19 11:14:43.128051] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:47.716 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:47.716 Asynchronous Event Request test 00:12:47.716 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:47.716 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:47.716 Registering asynchronous event callbacks... 00:12:47.716 Starting namespace attribute notice tests for all controllers... 00:12:47.716 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:47.716 aer_cb - Changed Namespace 00:12:47.716 Cleaning up... 
00:12:47.974 [ 00:12:47.974 { 00:12:47.974 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:47.974 "subtype": "Discovery", 00:12:47.974 "listen_addresses": [], 00:12:47.974 "allow_any_host": true, 00:12:47.974 "hosts": [] 00:12:47.974 }, 00:12:47.974 { 00:12:47.974 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:47.974 "subtype": "NVMe", 00:12:47.974 "listen_addresses": [ 00:12:47.974 { 00:12:47.974 "trtype": "VFIOUSER", 00:12:47.974 "adrfam": "IPv4", 00:12:47.974 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:47.974 "trsvcid": "0" 00:12:47.974 } 00:12:47.974 ], 00:12:47.974 "allow_any_host": true, 00:12:47.974 "hosts": [], 00:12:47.974 "serial_number": "SPDK1", 00:12:47.974 "model_number": "SPDK bdev Controller", 00:12:47.974 "max_namespaces": 32, 00:12:47.974 "min_cntlid": 1, 00:12:47.974 "max_cntlid": 65519, 00:12:47.974 "namespaces": [ 00:12:47.974 { 00:12:47.974 "nsid": 1, 00:12:47.974 "bdev_name": "Malloc1", 00:12:47.974 "name": "Malloc1", 00:12:47.974 "nguid": "B944F2789C924E6C9F26AE8380286FEC", 00:12:47.974 "uuid": "b944f278-9c92-4e6c-9f26-ae8380286fec" 00:12:47.974 }, 00:12:47.974 { 00:12:47.974 "nsid": 2, 00:12:47.974 "bdev_name": "Malloc3", 00:12:47.974 "name": "Malloc3", 00:12:47.974 "nguid": "73F19399C9E94E47B2DA7942B4B82B92", 00:12:47.974 "uuid": "73f19399-c9e9-4e47-b2da-7942b4b82b92" 00:12:47.974 } 00:12:47.974 ] 00:12:47.974 }, 00:12:47.974 { 00:12:47.974 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:47.974 "subtype": "NVMe", 00:12:47.974 "listen_addresses": [ 00:12:47.974 { 00:12:47.974 "trtype": "VFIOUSER", 00:12:47.974 "adrfam": "IPv4", 00:12:47.974 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:47.974 "trsvcid": "0" 00:12:47.974 } 00:12:47.974 ], 00:12:47.974 "allow_any_host": true, 00:12:47.974 "hosts": [], 00:12:47.974 "serial_number": "SPDK2", 00:12:47.974 "model_number": "SPDK bdev Controller", 00:12:47.974 "max_namespaces": 32, 00:12:47.974 "min_cntlid": 1, 00:12:47.974 "max_cntlid": 65519, 00:12:47.974 "namespaces": [ 
00:12:47.974 { 00:12:47.974 "nsid": 1, 00:12:47.974 "bdev_name": "Malloc2", 00:12:47.974 "name": "Malloc2", 00:12:47.974 "nguid": "3EB17CF3DCCE4B9887AAE08C3AD2672C", 00:12:47.974 "uuid": "3eb17cf3-dcce-4b98-87aa-e08c3ad2672c" 00:12:47.974 }, 00:12:47.974 { 00:12:47.974 "nsid": 2, 00:12:47.974 "bdev_name": "Malloc4", 00:12:47.974 "name": "Malloc4", 00:12:47.974 "nguid": "4BC7F084424C48BCA3F6AD0E98361C8B", 00:12:47.974 "uuid": "4bc7f084-424c-48bc-a3f6-ad0e98361c8b" 00:12:47.974 } 00:12:47.974 ] 00:12:47.974 } 00:12:47.974 ] 00:12:47.974 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2583120 00:12:47.974 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:47.974 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2577514 00:12:47.974 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2577514 ']' 00:12:47.974 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2577514 00:12:47.974 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:12:47.974 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.974 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2577514 00:12:47.974 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.974 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.974 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2577514' 00:12:47.974 killing process with pid 2577514 00:12:47.975 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 2577514 00:12:47.975 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2577514 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2583264 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2583264' 00:12:48.542 Process pid: 2583264 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2583264 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2583264 ']' 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.542 
11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.542 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:48.542 [2024-11-19 11:14:43.855464] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:48.542 [2024-11-19 11:14:43.856492] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:12:48.542 [2024-11-19 11:14:43.856561] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.542 [2024-11-19 11:14:43.929813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.542 [2024-11-19 11:14:43.982895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.542 [2024-11-19 11:14:43.982952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.542 [2024-11-19 11:14:43.982979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.542 [2024-11-19 11:14:43.982990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.542 [2024-11-19 11:14:43.982999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:48.542 [2024-11-19 11:14:43.984428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.542 [2024-11-19 11:14:43.984487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.542 [2024-11-19 11:14:43.984551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.542 [2024-11-19 11:14:43.984554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.801 [2024-11-19 11:14:44.067406] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:48.801 [2024-11-19 11:14:44.067691] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:48.801 [2024-11-19 11:14:44.067933] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:48.801 [2024-11-19 11:14:44.068589] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:48.801 [2024-11-19 11:14:44.068820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:12:48.801 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.801 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:48.801 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:49.736 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:49.995 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:49.995 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:49.995 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:49.995 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:49.995 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:50.255 Malloc1 00:12:50.255 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:50.821 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:50.822 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:12:51.387 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:51.387 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:51.387 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:51.387 Malloc2 00:12:51.645 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:51.902 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:52.160 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:52.418 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:52.418 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2583264 00:12:52.418 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2583264 ']' 00:12:52.418 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2583264 00:12:52.418 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:12:52.418 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.418 11:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2583264 00:12:52.418 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.418 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.418 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2583264' 00:12:52.418 killing process with pid 2583264 00:12:52.418 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2583264 00:12:52.418 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2583264 00:12:52.677 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:52.677 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:52.677 00:12:52.677 real 0m53.687s 00:12:52.677 user 3m27.570s 00:12:52.677 sys 0m3.978s 00:12:52.677 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.677 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:52.677 ************************************ 00:12:52.677 END TEST nvmf_vfio_user 00:12:52.677 ************************************ 00:12:52.677 11:14:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:52.677 11:14:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:52.677 11:14:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.677 11:14:48 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:12:52.677 ************************************ 00:12:52.677 START TEST nvmf_vfio_user_nvme_compliance 00:12:52.677 ************************************ 00:12:52.677 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:52.677 * Looking for test storage... 00:12:52.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:52.677 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:52.677 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:12:52.677 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.936 11:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.936 11:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:52.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.936 --rc genhtml_branch_coverage=1 00:12:52.936 --rc genhtml_function_coverage=1 00:12:52.936 --rc genhtml_legend=1 00:12:52.936 --rc geninfo_all_blocks=1 00:12:52.936 --rc geninfo_unexecuted_blocks=1 00:12:52.936 00:12:52.936 ' 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:52.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.936 --rc genhtml_branch_coverage=1 00:12:52.936 --rc genhtml_function_coverage=1 00:12:52.936 --rc genhtml_legend=1 00:12:52.936 --rc geninfo_all_blocks=1 00:12:52.936 --rc geninfo_unexecuted_blocks=1 00:12:52.936 00:12:52.936 ' 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:52.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.936 --rc genhtml_branch_coverage=1 00:12:52.936 --rc genhtml_function_coverage=1 00:12:52.936 --rc 
genhtml_legend=1 00:12:52.936 --rc geninfo_all_blocks=1 00:12:52.936 --rc geninfo_unexecuted_blocks=1 00:12:52.936 00:12:52.936 ' 00:12:52.936 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:52.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.937 --rc genhtml_branch_coverage=1 00:12:52.937 --rc genhtml_function_coverage=1 00:12:52.937 --rc genhtml_legend=1 00:12:52.937 --rc geninfo_all_blocks=1 00:12:52.937 --rc geninfo_unexecuted_blocks=1 00:12:52.937 00:12:52.937 ' 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.937 11:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:52.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.937 11:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2583883 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2583883' 00:12:52.937 Process pid: 2583883 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2583883 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2583883 ']' 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.937 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:52.937 [2024-11-19 11:14:48.301521] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:12:52.937 [2024-11-19 11:14:48.301603] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.938 [2024-11-19 11:14:48.378149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:52.938 [2024-11-19 11:14:48.431855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.938 [2024-11-19 11:14:48.431921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.938 [2024-11-19 11:14:48.431934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.938 [2024-11-19 11:14:48.431945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.938 [2024-11-19 11:14:48.431967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:53.196 [2024-11-19 11:14:48.433490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.196 [2024-11-19 11:14:48.433555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.196 [2024-11-19 11:14:48.433559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.196 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.196 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:12:53.196 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.140 11:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:54.140 malloc0 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:54.140 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:54.397 00:12:54.397 00:12:54.397 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.397 http://cunit.sourceforge.net/ 00:12:54.397 00:12:54.397 00:12:54.397 Suite: nvme_compliance 00:12:54.398 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 11:14:49.816858] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.398 [2024-11-19 11:14:49.818330] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:54.398 [2024-11-19 11:14:49.818378] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:54.398 [2024-11-19 11:14:49.818394] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:54.398 [2024-11-19 11:14:49.819875] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.398 passed 00:12:54.656 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 11:14:49.906492] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.656 [2024-11-19 11:14:49.909515] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.656 passed 00:12:54.656 Test: admin_identify_ns ...[2024-11-19 11:14:49.994893] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.656 [2024-11-19 11:14:50.054389] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:54.656 [2024-11-19 11:14:50.062395] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:54.656 [2024-11-19 11:14:50.083519] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:12:54.656 passed 00:12:54.913 Test: admin_get_features_mandatory_features ...[2024-11-19 11:14:50.172928] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.914 [2024-11-19 11:14:50.175949] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.914 passed 00:12:54.914 Test: admin_get_features_optional_features ...[2024-11-19 11:14:50.259481] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.914 [2024-11-19 11:14:50.262503] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.914 passed 00:12:54.914 Test: admin_set_features_number_of_queues ...[2024-11-19 11:14:50.345644] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.172 [2024-11-19 11:14:50.450574] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.172 passed 00:12:55.172 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 11:14:50.538397] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.172 [2024-11-19 11:14:50.541424] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.172 passed 00:12:55.172 Test: admin_get_log_page_with_lpo ...[2024-11-19 11:14:50.625527] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.430 [2024-11-19 11:14:50.693408] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:55.430 [2024-11-19 11:14:50.706450] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.430 passed 00:12:55.430 Test: fabric_property_get ...[2024-11-19 11:14:50.788913] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.430 [2024-11-19 11:14:50.790193] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:55.430 [2024-11-19 11:14:50.791941] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.430 passed 00:12:55.430 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 11:14:50.875523] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.430 [2024-11-19 11:14:50.876840] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:55.430 [2024-11-19 11:14:50.878541] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.430 passed 00:12:55.687 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 11:14:50.961759] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.687 [2024-11-19 11:14:51.045375] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:55.687 [2024-11-19 11:14:51.061403] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:55.687 [2024-11-19 11:14:51.066495] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.687 passed 00:12:55.687 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 11:14:51.151732] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.687 [2024-11-19 11:14:51.153049] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:55.687 [2024-11-19 11:14:51.154762] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.945 passed 00:12:55.945 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 11:14:51.237910] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.945 [2024-11-19 11:14:51.317388] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:55.945 [2024-11-19 
11:14:51.341375] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:55.945 [2024-11-19 11:14:51.346491] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.945 passed 00:12:55.945 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 11:14:51.428316] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.945 [2024-11-19 11:14:51.429649] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:55.945 [2024-11-19 11:14:51.429705] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:55.945 [2024-11-19 11:14:51.431356] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.203 passed 00:12:56.203 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 11:14:51.515317] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.203 [2024-11-19 11:14:51.607377] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:56.203 [2024-11-19 11:14:51.615386] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:56.203 [2024-11-19 11:14:51.623386] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:56.203 [2024-11-19 11:14:51.631369] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:56.203 [2024-11-19 11:14:51.660495] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.203 passed 00:12:56.462 Test: admin_create_io_sq_verify_pc ...[2024-11-19 11:14:51.746047] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.462 [2024-11-19 11:14:51.762387] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:56.462 [2024-11-19 11:14:51.780439] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.462 passed 00:12:56.462 Test: admin_create_io_qp_max_qps ...[2024-11-19 11:14:51.862991] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.834 [2024-11-19 11:14:52.967380] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:12:58.092 [2024-11-19 11:14:53.342046] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.092 passed 00:12:58.092 Test: admin_create_io_sq_shared_cq ...[2024-11-19 11:14:53.426320] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.092 [2024-11-19 11:14:53.558375] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:58.350 [2024-11-19 11:14:53.595446] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.350 passed 00:12:58.350 00:12:58.350 Run Summary: Type Total Ran Passed Failed Inactive 00:12:58.351 suites 1 1 n/a 0 0 00:12:58.351 tests 18 18 18 0 0 00:12:58.351 asserts 360 360 360 0 n/a 00:12:58.351 00:12:58.351 Elapsed time = 1.566 seconds 00:12:58.351 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2583883 00:12:58.351 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2583883 ']' 00:12:58.351 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2583883 00:12:58.351 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:12:58.351 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.351 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2583883 00:12:58.351 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.351 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.351 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2583883' 00:12:58.351 killing process with pid 2583883 00:12:58.351 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2583883 00:12:58.351 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2583883 00:12:58.610 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:58.610 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:58.610 00:12:58.610 real 0m5.860s 00:12:58.610 user 0m16.393s 00:12:58.610 sys 0m0.573s 00:12:58.610 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.610 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:58.610 ************************************ 00:12:58.610 END TEST nvmf_vfio_user_nvme_compliance 00:12:58.610 ************************************ 00:12:58.610 11:14:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:58.610 11:14:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.610 11:14:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.610 11:14:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.610 ************************************ 00:12:58.610 START TEST nvmf_vfio_user_fuzz 00:12:58.610 ************************************ 00:12:58.610 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:58.610 * Looking for test storage... 00:12:58.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.610 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:58.610 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:12:58.610 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:58.869 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.870 11:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:58.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.870 --rc genhtml_branch_coverage=1 00:12:58.870 --rc genhtml_function_coverage=1 00:12:58.870 --rc genhtml_legend=1 00:12:58.870 --rc geninfo_all_blocks=1 00:12:58.870 --rc geninfo_unexecuted_blocks=1 00:12:58.870 00:12:58.870 ' 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:58.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.870 --rc genhtml_branch_coverage=1 00:12:58.870 --rc genhtml_function_coverage=1 00:12:58.870 --rc genhtml_legend=1 00:12:58.870 --rc geninfo_all_blocks=1 00:12:58.870 --rc geninfo_unexecuted_blocks=1 00:12:58.870 00:12:58.870 ' 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:58.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.870 --rc genhtml_branch_coverage=1 00:12:58.870 --rc genhtml_function_coverage=1 00:12:58.870 --rc genhtml_legend=1 00:12:58.870 --rc geninfo_all_blocks=1 00:12:58.870 --rc geninfo_unexecuted_blocks=1 00:12:58.870 00:12:58.870 ' 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:58.870 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:58.870 --rc genhtml_branch_coverage=1 00:12:58.870 --rc genhtml_function_coverage=1 00:12:58.870 --rc genhtml_legend=1 00:12:58.870 --rc geninfo_all_blocks=1 00:12:58.870 --rc geninfo_unexecuted_blocks=1 00:12:58.870 00:12:58.870 ' 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.870 11:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.870 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2584614 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2584614' 00:12:58.871 Process pid: 2584614 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2584614 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2584614 ']' 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.871 11:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.871 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:59.129 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.129 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:12:59.129 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:00.066 malloc0 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:00.066 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:32.159 Fuzzing completed. Shutting down the fuzz application 00:13:32.159 00:13:32.159 Dumping successful admin opcodes: 00:13:32.159 8, 9, 10, 24, 00:13:32.159 Dumping successful io opcodes: 00:13:32.159 0, 00:13:32.159 NS: 0x20000081ef00 I/O qp, Total commands completed: 641236, total successful commands: 2487, random_seed: 67604288 00:13:32.159 NS: 0x20000081ef00 admin qp, Total commands completed: 152816, total successful commands: 1233, random_seed: 313615808 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2584614 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2584614 ']' 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2584614 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2584614 00:13:32.159 11:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2584614' 00:13:32.159 killing process with pid 2584614 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2584614 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2584614 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:32.159 00:13:32.159 real 0m32.370s 00:13:32.159 user 0m33.273s 00:13:32.159 sys 0m25.372s 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:32.159 ************************************ 00:13:32.159 END TEST nvmf_vfio_user_fuzz 00:13:32.159 ************************************ 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:32.159 ************************************ 00:13:32.159 START TEST nvmf_auth_target 00:13:32.159 ************************************ 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:32.159 * Looking for test storage... 00:13:32.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.159 11:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:32.159 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.160 11:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:32.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.160 --rc genhtml_branch_coverage=1 00:13:32.160 --rc genhtml_function_coverage=1 00:13:32.160 --rc genhtml_legend=1 00:13:32.160 --rc geninfo_all_blocks=1 00:13:32.160 --rc geninfo_unexecuted_blocks=1 00:13:32.160 00:13:32.160 ' 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:32.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.160 --rc genhtml_branch_coverage=1 00:13:32.160 --rc genhtml_function_coverage=1 00:13:32.160 --rc genhtml_legend=1 00:13:32.160 --rc geninfo_all_blocks=1 00:13:32.160 --rc geninfo_unexecuted_blocks=1 00:13:32.160 00:13:32.160 ' 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:32.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.160 --rc genhtml_branch_coverage=1 00:13:32.160 --rc genhtml_function_coverage=1 00:13:32.160 --rc genhtml_legend=1 00:13:32.160 --rc geninfo_all_blocks=1 00:13:32.160 --rc geninfo_unexecuted_blocks=1 00:13:32.160 00:13:32.160 ' 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:32.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.160 --rc genhtml_branch_coverage=1 00:13:32.160 --rc genhtml_function_coverage=1 00:13:32.160 --rc genhtml_legend=1 00:13:32.160 
--rc geninfo_all_blocks=1 00:13:32.160 --rc geninfo_unexecuted_blocks=1 00:13:32.160 00:13:32.160 ' 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.160 
11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:32.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:32.160 11:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:32.160 11:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:32.160 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:34.063 11:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:34.063 11:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:34.063 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:34.063 Found 0000:82:00.1 (0x8086 - 0x159b) 00:13:34.063 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.064 
11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:34.064 Found net devices under 0000:82:00.0: cvl_0_0 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:34.064 
11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:34.064 Found net devices under 0000:82:00.1: cvl_0_1 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:34.064 11:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:34.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:13:34.064 00:13:34.064 --- 10.0.0.2 ping statistics --- 00:13:34.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.064 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:13:34.064 00:13:34.064 --- 10.0.0.1 ping statistics --- 00:13:34.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.064 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2590990 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2590990 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2590990 ']' 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:34.064 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.065 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2591128 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=db18a65a3641b976d182191b543dc0836c5803e3182331c8 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ew0 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key db18a65a3641b976d182191b543dc0836c5803e3182331c8 0 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 db18a65a3641b976d182191b543dc0836c5803e3182331c8 0 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=db18a65a3641b976d182191b543dc0836c5803e3182331c8 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ew0 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ew0 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.ew0 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ce9af06b6c89731215343daa62affaadfd7b39400e8ccb952b29599f8b40a189 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.HLS 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ce9af06b6c89731215343daa62affaadfd7b39400e8ccb952b29599f8b40a189 3 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ce9af06b6c89731215343daa62affaadfd7b39400e8ccb952b29599f8b40a189 3 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ce9af06b6c89731215343daa62affaadfd7b39400e8ccb952b29599f8b40a189 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:13:34.324 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.HLS 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.HLS 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.HLS 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=91631318af73a77da7e83323ce6c89be 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.L8Q 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 91631318af73a77da7e83323ce6c89be 1 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
91631318af73a77da7e83323ce6c89be 1 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=91631318af73a77da7e83323ce6c89be 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.L8Q 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.L8Q 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.L8Q 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3973af974749f8503c6aa26347dc500706a17c5f16fe1f74 00:13:34.583 11:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.nHL 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3973af974749f8503c6aa26347dc500706a17c5f16fe1f74 2 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3973af974749f8503c6aa26347dc500706a17c5f16fe1f74 2 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3973af974749f8503c6aa26347dc500706a17c5f16fe1f74 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.nHL 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.nHL 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.nHL 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=51479b469b9bcd7ed3826e73c7364946ee4955c3f848c850 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.TLq 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 51479b469b9bcd7ed3826e73c7364946ee4955c3f848c850 2 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 51479b469b9bcd7ed3826e73c7364946ee4955c3f848c850 2 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=51479b469b9bcd7ed3826e73c7364946ee4955c3f848c850 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:34.583 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.TLq 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.TLq 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.TLq 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=23e026d1f35382cd82ed17bc150599fc 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.iiS 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 23e026d1f35382cd82ed17bc150599fc 1 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 23e026d1f35382cd82ed17bc150599fc 1 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=23e026d1f35382cd82ed17bc150599fc 00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:13:34.584 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.iiS 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.iiS 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.iiS 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b93116433b166a21666e1627628441f0edbf835fc1a7e9b9b2ba89ab67c926a6 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WhC 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b93116433b166a21666e1627628441f0edbf835fc1a7e9b9b2ba89ab67c926a6 3 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 b93116433b166a21666e1627628441f0edbf835fc1a7e9b9b2ba89ab67c926a6 3 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b93116433b166a21666e1627628441f0edbf835fc1a7e9b9b2ba89ab67c926a6 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:34.584 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:34.842 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WhC 00:13:34.842 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WhC 00:13:34.842 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.WhC 00:13:34.842 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:34.842 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2590990 00:13:34.842 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2590990 ']' 00:13:34.842 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.842 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.842 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:34.842 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.842 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.101 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.101 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:35.101 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2591128 /var/tmp/host.sock 00:13:35.101 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2591128 ']' 00:13:35.101 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:35.101 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.101 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:35.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:13:35.101 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.101 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ew0 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ew0 00:13:35.359 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ew0 00:13:35.616 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.HLS ]] 00:13:35.616 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HLS 00:13:35.616 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.616 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.616 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.616 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HLS 00:13:35.616 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HLS 00:13:35.874 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:35.874 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.L8Q 00:13:35.874 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.874 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.874 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.874 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.L8Q 00:13:35.875 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.L8Q 00:13:36.132 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.nHL ]] 00:13:36.132 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nHL 00:13:36.132 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.132 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.132 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.132 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nHL 00:13:36.132 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nHL 00:13:36.390 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:36.390 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.TLq 00:13:36.390 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.390 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.390 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.390 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.TLq 00:13:36.390 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.TLq 00:13:36.647 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.iiS ]] 00:13:36.647 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iiS 00:13:36.647 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.647 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.647 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.647 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iiS 00:13:36.647 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iiS 00:13:36.904 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:36.904 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.WhC 00:13:36.904 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.904 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.904 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.904 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.WhC 00:13:36.904 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.WhC 00:13:37.162 11:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:37.162 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:37.162 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:37.162 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.162 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:37.162 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:37.420 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:37.420 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.420 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:37.420 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:37.420 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:37.420 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.420 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.420 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.420 11:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.420 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.420 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.420 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.679 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.937 00:13:37.937 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.937 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.937 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.195 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.195 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.195 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.195 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.195 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.195 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.195 { 00:13:38.195 "cntlid": 1, 00:13:38.195 "qid": 0, 00:13:38.195 "state": "enabled", 00:13:38.195 "thread": "nvmf_tgt_poll_group_000", 00:13:38.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:13:38.195 "listen_address": { 00:13:38.195 "trtype": "TCP", 00:13:38.195 "adrfam": "IPv4", 00:13:38.195 "traddr": "10.0.0.2", 00:13:38.195 "trsvcid": "4420" 00:13:38.195 }, 00:13:38.195 "peer_address": { 00:13:38.195 "trtype": "TCP", 00:13:38.195 "adrfam": "IPv4", 00:13:38.195 "traddr": "10.0.0.1", 00:13:38.195 "trsvcid": "36868" 00:13:38.195 }, 00:13:38.195 "auth": { 00:13:38.195 "state": "completed", 00:13:38.195 "digest": "sha256", 00:13:38.195 "dhgroup": "null" 00:13:38.195 } 00:13:38.195 } 00:13:38.195 ]' 00:13:38.196 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.196 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:38.196 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.196 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:38.196 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.196 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.196 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.196 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.454 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:13:38.454 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:13:39.387 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.387 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:39.387 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.387 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.387 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.387 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.387 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:13:39.387 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.646 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.212 00:13:40.212 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.212 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.212 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.212 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.212 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.212 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.212 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.212 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.212 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.212 { 00:13:40.212 "cntlid": 3, 00:13:40.212 "qid": 0, 00:13:40.212 "state": "enabled", 00:13:40.212 "thread": "nvmf_tgt_poll_group_000", 00:13:40.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:13:40.212 "listen_address": { 00:13:40.212 "trtype": "TCP", 00:13:40.212 "adrfam": "IPv4", 00:13:40.212 
"traddr": "10.0.0.2", 00:13:40.212 "trsvcid": "4420" 00:13:40.212 }, 00:13:40.212 "peer_address": { 00:13:40.212 "trtype": "TCP", 00:13:40.212 "adrfam": "IPv4", 00:13:40.212 "traddr": "10.0.0.1", 00:13:40.212 "trsvcid": "36900" 00:13:40.212 }, 00:13:40.212 "auth": { 00:13:40.212 "state": "completed", 00:13:40.212 "digest": "sha256", 00:13:40.212 "dhgroup": "null" 00:13:40.212 } 00:13:40.212 } 00:13:40.212 ]' 00:13:40.212 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.470 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:40.470 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.470 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:40.470 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.470 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.470 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.470 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.728 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:13:40.728 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 
--hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:13:41.662 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.662 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:41.662 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.662 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.662 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.662 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.662 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:41.662 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:41.920 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:41.920 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.920 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:41.920 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:13:41.920 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:41.920 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.921 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.921 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.921 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.921 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.921 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.921 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.921 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.179 00:13:42.179 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.179 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.179 
11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.437 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.437 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.437 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.437 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.437 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.437 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.437 { 00:13:42.437 "cntlid": 5, 00:13:42.437 "qid": 0, 00:13:42.437 "state": "enabled", 00:13:42.437 "thread": "nvmf_tgt_poll_group_000", 00:13:42.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:13:42.437 "listen_address": { 00:13:42.437 "trtype": "TCP", 00:13:42.437 "adrfam": "IPv4", 00:13:42.437 "traddr": "10.0.0.2", 00:13:42.437 "trsvcid": "4420" 00:13:42.437 }, 00:13:42.437 "peer_address": { 00:13:42.437 "trtype": "TCP", 00:13:42.437 "adrfam": "IPv4", 00:13:42.437 "traddr": "10.0.0.1", 00:13:42.437 "trsvcid": "34810" 00:13:42.437 }, 00:13:42.437 "auth": { 00:13:42.437 "state": "completed", 00:13:42.437 "digest": "sha256", 00:13:42.437 "dhgroup": "null" 00:13:42.437 } 00:13:42.437 } 00:13:42.437 ]' 00:13:42.437 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.695 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:42.695 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:13:42.695 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:42.695 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.695 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.695 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.695 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.952 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:13:42.953 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:13:43.896 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.896 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:43.896 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.896 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.896 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.896 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.896 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:43.896 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:44.163 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:44.163 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.163 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:44.163 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:44.163 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:44.163 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.163 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:13:44.163 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.164 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:44.164 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.164 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:44.164 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:44.164 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:44.421 00:13:44.421 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.422 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.422 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.680 
11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.680 { 00:13:44.680 "cntlid": 7, 00:13:44.680 "qid": 0, 00:13:44.680 "state": "enabled", 00:13:44.680 "thread": "nvmf_tgt_poll_group_000", 00:13:44.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:13:44.680 "listen_address": { 00:13:44.680 "trtype": "TCP", 00:13:44.680 "adrfam": "IPv4", 00:13:44.680 "traddr": "10.0.0.2", 00:13:44.680 "trsvcid": "4420" 00:13:44.680 }, 00:13:44.680 "peer_address": { 00:13:44.680 "trtype": "TCP", 00:13:44.680 "adrfam": "IPv4", 00:13:44.680 "traddr": "10.0.0.1", 00:13:44.680 "trsvcid": "34838" 00:13:44.680 }, 00:13:44.680 "auth": { 00:13:44.680 "state": "completed", 00:13:44.680 "digest": "sha256", 00:13:44.680 "dhgroup": "null" 00:13:44.680 } 00:13:44.680 } 00:13:44.680 ]' 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.680 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.246 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:13:45.246 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:13:46.179 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.179 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:46.179 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.179 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.179 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.179 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:46.179 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.179 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:46.179 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.437 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.695 00:13:46.695 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.695 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.696 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.954 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.954 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.954 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.954 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.954 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.954 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.954 { 00:13:46.954 "cntlid": 9, 00:13:46.954 "qid": 0, 00:13:46.954 "state": "enabled", 00:13:46.954 "thread": "nvmf_tgt_poll_group_000", 00:13:46.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:13:46.954 "listen_address": { 00:13:46.954 "trtype": "TCP", 00:13:46.954 "adrfam": "IPv4", 00:13:46.954 "traddr": "10.0.0.2", 00:13:46.954 "trsvcid": "4420" 00:13:46.954 }, 00:13:46.954 "peer_address": { 00:13:46.954 "trtype": "TCP", 00:13:46.954 "adrfam": "IPv4", 00:13:46.954 "traddr": "10.0.0.1", 00:13:46.954 "trsvcid": "34858" 00:13:46.954 
}, 00:13:46.954 "auth": { 00:13:46.954 "state": "completed", 00:13:46.954 "digest": "sha256", 00:13:46.954 "dhgroup": "ffdhe2048" 00:13:46.954 } 00:13:46.954 } 00:13:46.954 ]' 00:13:46.954 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.954 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:46.954 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.954 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:46.954 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.212 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.212 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.212 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.470 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:13:47.470 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret 
DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:13:48.404 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.405 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:48.405 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.405 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.405 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.405 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.405 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:48.405 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:48.662 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:13:48.663 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.663 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:48.663 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:48.663 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:13:48.663 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.663 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.663 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.663 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.663 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.663 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.663 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.663 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.921 00:13:48.921 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.921 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.921 11:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.179 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.179 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.179 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.179 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.179 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.179 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.179 { 00:13:49.179 "cntlid": 11, 00:13:49.179 "qid": 0, 00:13:49.179 "state": "enabled", 00:13:49.179 "thread": "nvmf_tgt_poll_group_000", 00:13:49.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:13:49.179 "listen_address": { 00:13:49.179 "trtype": "TCP", 00:13:49.179 "adrfam": "IPv4", 00:13:49.179 "traddr": "10.0.0.2", 00:13:49.179 "trsvcid": "4420" 00:13:49.179 }, 00:13:49.179 "peer_address": { 00:13:49.179 "trtype": "TCP", 00:13:49.179 "adrfam": "IPv4", 00:13:49.179 "traddr": "10.0.0.1", 00:13:49.179 "trsvcid": "34878" 00:13:49.179 }, 00:13:49.179 "auth": { 00:13:49.179 "state": "completed", 00:13:49.179 "digest": "sha256", 00:13:49.179 "dhgroup": "ffdhe2048" 00:13:49.179 } 00:13:49.179 } 00:13:49.179 ]' 00:13:49.179 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.179 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.179 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.436 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:49.436 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.436 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.436 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.436 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.695 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:13:49.695 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:13:50.628 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.628 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:50.628 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.628 11:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.628 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.628 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.628 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:50.629 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:50.886 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:13:50.886 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.886 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:50.886 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:50.886 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:50.886 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.886 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.886 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.886 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.886 11:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.886 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.886 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.886 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.144 00:13:51.144 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.144 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.144 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.402 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.402 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.402 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.402 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.402 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.402 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.402 { 00:13:51.402 "cntlid": 13, 00:13:51.402 "qid": 0, 00:13:51.402 "state": "enabled", 00:13:51.402 "thread": "nvmf_tgt_poll_group_000", 00:13:51.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:13:51.402 "listen_address": { 00:13:51.402 "trtype": "TCP", 00:13:51.402 "adrfam": "IPv4", 00:13:51.402 "traddr": "10.0.0.2", 00:13:51.402 "trsvcid": "4420" 00:13:51.402 }, 00:13:51.402 "peer_address": { 00:13:51.402 "trtype": "TCP", 00:13:51.402 "adrfam": "IPv4", 00:13:51.402 "traddr": "10.0.0.1", 00:13:51.402 "trsvcid": "34904" 00:13:51.402 }, 00:13:51.402 "auth": { 00:13:51.402 "state": "completed", 00:13:51.402 "digest": "sha256", 00:13:51.402 "dhgroup": "ffdhe2048" 00:13:51.402 } 00:13:51.402 } 00:13:51.402 ]' 00:13:51.402 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.402 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.402 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.402 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:51.402 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.660 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.660 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.660 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:13:51.957 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:13:51.957 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.910 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.477 00:13:53.477 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.477 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.477 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.477 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.477 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.477 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.477 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.734 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.734 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.734 { 00:13:53.734 "cntlid": 15, 00:13:53.734 "qid": 0, 00:13:53.734 "state": "enabled", 00:13:53.734 "thread": "nvmf_tgt_poll_group_000", 00:13:53.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:13:53.734 "listen_address": { 00:13:53.734 "trtype": "TCP", 00:13:53.734 "adrfam": "IPv4", 00:13:53.734 "traddr": "10.0.0.2", 00:13:53.734 "trsvcid": "4420" 00:13:53.734 }, 00:13:53.734 "peer_address": { 00:13:53.734 "trtype": "TCP", 00:13:53.734 "adrfam": "IPv4", 00:13:53.734 "traddr": "10.0.0.1", 00:13:53.734 "trsvcid": "33478" 00:13:53.734 }, 00:13:53.734 "auth": { 00:13:53.734 
"state": "completed", 00:13:53.734 "digest": "sha256", 00:13:53.734 "dhgroup": "ffdhe2048" 00:13:53.734 } 00:13:53.734 } 00:13:53.734 ]' 00:13:53.734 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.734 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.734 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.734 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:53.735 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.735 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.735 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.735 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.992 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:13:53.992 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:13:54.926 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.926 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.926 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:54.926 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.926 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.926 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.926 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.926 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.926 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:54.926 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.184 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.442 00:13:55.442 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.442 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.442 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.700 
11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.700 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.700 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.700 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.700 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.700 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.700 { 00:13:55.700 "cntlid": 17, 00:13:55.700 "qid": 0, 00:13:55.700 "state": "enabled", 00:13:55.700 "thread": "nvmf_tgt_poll_group_000", 00:13:55.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:13:55.700 "listen_address": { 00:13:55.700 "trtype": "TCP", 00:13:55.700 "adrfam": "IPv4", 00:13:55.700 "traddr": "10.0.0.2", 00:13:55.700 "trsvcid": "4420" 00:13:55.700 }, 00:13:55.700 "peer_address": { 00:13:55.700 "trtype": "TCP", 00:13:55.700 "adrfam": "IPv4", 00:13:55.700 "traddr": "10.0.0.1", 00:13:55.700 "trsvcid": "33504" 00:13:55.700 }, 00:13:55.700 "auth": { 00:13:55.700 "state": "completed", 00:13:55.700 "digest": "sha256", 00:13:55.700 "dhgroup": "ffdhe3072" 00:13:55.700 } 00:13:55.700 } 00:13:55.700 ]' 00:13:55.700 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.958 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.958 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.958 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:55.958 11:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.958 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.958 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.958 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.216 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:13:56.216 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:13:57.149 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.149 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:57.149 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.149 11:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.149 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.149 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.149 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:57.149 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:57.408 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:57.408 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.408 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:57.408 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:57.408 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:57.408 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.408 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.408 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.408 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.408 11:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.408 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.408 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.408 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.665 00:13:57.665 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.665 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.666 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.923 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.923 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.923 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.923 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.923 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.923 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.923 { 00:13:57.923 "cntlid": 19, 00:13:57.923 "qid": 0, 00:13:57.923 "state": "enabled", 00:13:57.923 "thread": "nvmf_tgt_poll_group_000", 00:13:57.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:13:57.923 "listen_address": { 00:13:57.923 "trtype": "TCP", 00:13:57.923 "adrfam": "IPv4", 00:13:57.923 "traddr": "10.0.0.2", 00:13:57.923 "trsvcid": "4420" 00:13:57.923 }, 00:13:57.923 "peer_address": { 00:13:57.923 "trtype": "TCP", 00:13:57.923 "adrfam": "IPv4", 00:13:57.923 "traddr": "10.0.0.1", 00:13:57.923 "trsvcid": "33520" 00:13:57.923 }, 00:13:57.923 "auth": { 00:13:57.923 "state": "completed", 00:13:57.923 "digest": "sha256", 00:13:57.923 "dhgroup": "ffdhe3072" 00:13:57.923 } 00:13:57.923 } 00:13:57.923 ]' 00:13:57.923 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.181 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:58.181 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.181 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:58.181 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.181 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.181 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.181 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:13:58.438 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:13:58.439 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:13:59.372 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.372 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:59.372 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.372 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.372 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.372 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.372 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:59.372 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:59.630 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:59.630 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.630 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:59.630 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:59.630 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:59.630 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.630 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.630 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.630 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.630 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.630 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.630 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.631 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.199 00:14:00.199 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.199 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.199 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.458 { 00:14:00.458 "cntlid": 21, 00:14:00.458 "qid": 0, 00:14:00.458 "state": "enabled", 00:14:00.458 "thread": "nvmf_tgt_poll_group_000", 00:14:00.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:00.458 "listen_address": { 00:14:00.458 "trtype": "TCP", 00:14:00.458 "adrfam": "IPv4", 00:14:00.458 "traddr": "10.0.0.2", 00:14:00.458 "trsvcid": "4420" 00:14:00.458 }, 00:14:00.458 "peer_address": { 00:14:00.458 "trtype": "TCP", 00:14:00.458 "adrfam": "IPv4", 
00:14:00.458 "traddr": "10.0.0.1", 00:14:00.458 "trsvcid": "33550" 00:14:00.458 }, 00:14:00.458 "auth": { 00:14:00.458 "state": "completed", 00:14:00.458 "digest": "sha256", 00:14:00.458 "dhgroup": "ffdhe3072" 00:14:00.458 } 00:14:00.458 } 00:14:00.458 ]' 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.458 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.716 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:00.716 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret 
DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:01.650 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.650 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:01.650 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.650 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.650 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.650 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.650 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:01.651 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:01.909 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:01.909 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.909 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:01.909 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:01.909 11:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:01.909 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.909 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:14:01.909 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.909 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.909 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.909 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:01.909 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.909 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.475 00:14:02.475 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.475 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.475 11:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.475 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.733 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.733 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.733 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.733 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.733 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:02.733 { 00:14:02.733 "cntlid": 23, 00:14:02.733 "qid": 0, 00:14:02.733 "state": "enabled", 00:14:02.733 "thread": "nvmf_tgt_poll_group_000", 00:14:02.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:02.733 "listen_address": { 00:14:02.733 "trtype": "TCP", 00:14:02.733 "adrfam": "IPv4", 00:14:02.733 "traddr": "10.0.0.2", 00:14:02.733 "trsvcid": "4420" 00:14:02.733 }, 00:14:02.733 "peer_address": { 00:14:02.733 "trtype": "TCP", 00:14:02.733 "adrfam": "IPv4", 00:14:02.733 "traddr": "10.0.0.1", 00:14:02.733 "trsvcid": "55994" 00:14:02.733 }, 00:14:02.733 "auth": { 00:14:02.733 "state": "completed", 00:14:02.733 "digest": "sha256", 00:14:02.733 "dhgroup": "ffdhe3072" 00:14:02.733 } 00:14:02.733 } 00:14:02.733 ]' 00:14:02.733 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:02.733 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.733 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.733 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:02.733 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.733 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.733 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.733 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.991 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:14:02.991 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:14:03.925 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.925 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:03.925 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.925 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.925 11:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.925 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:03.925 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.925 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:03.925 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:04.184 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:04.184 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.184 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:04.184 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:04.184 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:04.184 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.184 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.184 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.184 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.184 
11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.184 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.184 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.184 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.443 00:14:04.702 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.702 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.702 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.960 { 00:14:04.960 "cntlid": 25, 00:14:04.960 "qid": 0, 00:14:04.960 "state": "enabled", 00:14:04.960 "thread": "nvmf_tgt_poll_group_000", 00:14:04.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:04.960 "listen_address": { 00:14:04.960 "trtype": "TCP", 00:14:04.960 "adrfam": "IPv4", 00:14:04.960 "traddr": "10.0.0.2", 00:14:04.960 "trsvcid": "4420" 00:14:04.960 }, 00:14:04.960 "peer_address": { 00:14:04.960 "trtype": "TCP", 00:14:04.960 "adrfam": "IPv4", 00:14:04.960 "traddr": "10.0.0.1", 00:14:04.960 "trsvcid": "56018" 00:14:04.960 }, 00:14:04.960 "auth": { 00:14:04.960 "state": "completed", 00:14:04.960 "digest": "sha256", 00:14:04.960 "dhgroup": "ffdhe4096" 00:14:04.960 } 00:14:04.960 } 00:14:04.960 ]' 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.960 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
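The `jq -r '.[0].auth.digest'` / `.dhgroup` / `.state` checks in the trace above validate the `auth` block that `nvmf_subsystem_get_qpairs` returns for each connected qpair. A minimal self-contained sketch of the same validation in Python (the helper name is hypothetical, not part of `auth.sh`; the sample JSON is taken from the log output above):

```python
import json

def check_qpair_auth(qpairs_json: str, digest: str, dhgroup: str) -> bool:
    """Mirror the jq checks from target/auth.sh: the first qpair's auth
    section must report the expected digest and dhgroup, and negotiation
    must have reached the 'completed' state."""
    qpairs = json.loads(qpairs_json)
    auth = qpairs[0]["auth"]
    assert auth["digest"] == digest, f"unexpected digest {auth['digest']!r}"
    assert auth["dhgroup"] == dhgroup, f"unexpected dhgroup {auth['dhgroup']!r}"
    assert auth["state"] == "completed", f"auth not completed: {auth['state']!r}"
    return True

# Trimmed-down sample matching the qpair JSON printed by the test:
sample = json.dumps([{
    "cntlid": 25,
    "qid": 0,
    "state": "enabled",
    "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe4096"},
}])
```

`check_qpair_auth(sample, "sha256", "ffdhe4096")` reproduces the three `[[ ... == ... ]]` comparisons the script performs after each `bdev_nvme_attach_controller`.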
00:14:05.219 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:14:05.219 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:14:06.154 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.154 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:06.154 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.154 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.154 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.154 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.154 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:06.154 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:06.412 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:06.412 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.412 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:06.412 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:06.412 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:06.412 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.412 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.412 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.412 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.412 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.412 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.412 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.412 11:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.978 00:14:06.978 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.978 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.978 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.237 { 00:14:07.237 "cntlid": 27, 00:14:07.237 "qid": 0, 00:14:07.237 "state": "enabled", 00:14:07.237 "thread": "nvmf_tgt_poll_group_000", 00:14:07.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:07.237 "listen_address": { 00:14:07.237 "trtype": "TCP", 00:14:07.237 "adrfam": "IPv4", 00:14:07.237 "traddr": "10.0.0.2", 00:14:07.237 "trsvcid": "4420" 00:14:07.237 }, 00:14:07.237 "peer_address": { 
00:14:07.237 "trtype": "TCP", 00:14:07.237 "adrfam": "IPv4", 00:14:07.237 "traddr": "10.0.0.1", 00:14:07.237 "trsvcid": "56054" 00:14:07.237 }, 00:14:07.237 "auth": { 00:14:07.237 "state": "completed", 00:14:07.237 "digest": "sha256", 00:14:07.237 "dhgroup": "ffdhe4096" 00:14:07.237 } 00:14:07.237 } 00:14:07.237 ]' 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.237 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.495 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:14:07.495 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret 
DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:14:08.430 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.430 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:08.430 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.430 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.430 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.430 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.430 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:08.430 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:08.690 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:08.690 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.690 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:08.690 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:08.690 11:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:08.690 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.690 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.690 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.690 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.690 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.690 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.690 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.690 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.259 00:14:09.259 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.259 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.259 11:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.517 { 00:14:09.517 "cntlid": 29, 00:14:09.517 "qid": 0, 00:14:09.517 "state": "enabled", 00:14:09.517 "thread": "nvmf_tgt_poll_group_000", 00:14:09.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:09.517 "listen_address": { 00:14:09.517 "trtype": "TCP", 00:14:09.517 "adrfam": "IPv4", 00:14:09.517 "traddr": "10.0.0.2", 00:14:09.517 "trsvcid": "4420" 00:14:09.517 }, 00:14:09.517 "peer_address": { 00:14:09.517 "trtype": "TCP", 00:14:09.517 "adrfam": "IPv4", 00:14:09.517 "traddr": "10.0.0.1", 00:14:09.517 "trsvcid": "56096" 00:14:09.517 }, 00:14:09.517 "auth": { 00:14:09.517 "state": "completed", 00:14:09.517 "digest": "sha256", 00:14:09.517 "dhgroup": "ffdhe4096" 00:14:09.517 } 00:14:09.517 } 00:14:09.517 ]' 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # 
jq -r '.[0].auth.dhgroup' 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.517 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.776 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:09.776 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:10.709 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.709 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:10.709 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.709 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.709 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.709 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.709 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:10.709 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.968 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:11.533 00:14:11.533 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.533 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.533 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.793 { 00:14:11.793 "cntlid": 31, 00:14:11.793 "qid": 0, 00:14:11.793 "state": "enabled", 00:14:11.793 "thread": "nvmf_tgt_poll_group_000", 00:14:11.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:11.793 "listen_address": { 00:14:11.793 "trtype": "TCP", 00:14:11.793 "adrfam": "IPv4", 00:14:11.793 "traddr": "10.0.0.2", 00:14:11.793 "trsvcid": "4420" 00:14:11.793 }, 00:14:11.793 "peer_address": { 00:14:11.793 "trtype": "TCP", 00:14:11.793 "adrfam": "IPv4", 00:14:11.793 "traddr": "10.0.0.1", 00:14:11.793 "trsvcid": "49990" 00:14:11.793 }, 00:14:11.793 "auth": { 00:14:11.793 "state": "completed", 00:14:11.793 "digest": "sha256", 00:14:11.793 "dhgroup": "ffdhe4096" 00:14:11.793 } 00:14:11.793 } 00:14:11.793 ]' 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.793 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
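The `--dhchap-secret` strings passed to `nvme connect` above use the NVMe DH-HMAC-CHAP secret representation `DHHC-1:<hash id>:<base64>:`, where the base64 payload is the key material with a 4-byte CRC-32 of that material appended. A hedged round-trip sketch of that framing (helper names are hypothetical; `zlib.crc32` as the exact checksum variant is an assumption here, as kernel/nvme-cli implementations may differ in CRC seeding):

```python
import base64
import zlib

def encode_dhchap_secret(key: bytes, hash_id: int = 1) -> str:
    # Append a CRC-32 of the key (little-endian) and base64-encode the blob.
    # ASSUMPTION: zlib.crc32 stands in for the checksum; real tooling may
    # use a different CRC-32 seeding/finalization.
    blob = key + zlib.crc32(key).to_bytes(4, "little")
    return f"DHHC-1:{hash_id:02d}:{base64.b64encode(blob).decode()}:"

def decode_dhchap_secret(secret: str) -> bytes:
    # Split the DHHC-1:<hash id>:<base64>: fields, then verify the trailing
    # CRC-32 before handing back the raw key material.
    prefix, _hash_id, b64, _trailer = secret.split(":")
    assert prefix == "DHHC-1", "not a DH-HMAC-CHAP secret"
    blob = base64.b64decode(b64)
    key, crc = blob[:-4], int.from_bytes(blob[-4:], "little")
    assert zlib.crc32(key) == crc, "key CRC mismatch"
    return key
```

Encoding 32 bytes of key material yields a `DHHC-1:01:...:` string of the same shape as the secrets in the trace; decoding one of this sketch's own secrets recovers the key and rejects payloads whose checksum does not match.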
00:14:12.051 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:14:12.051 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:14:12.984 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.984 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:12.984 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.984 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.985 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.985 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:12.985 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.985 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:12.985 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.243 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.809 00:14:13.809 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.809 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.809 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.067 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.067 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.067 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.067 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.067 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.067 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.067 { 00:14:14.067 "cntlid": 33, 00:14:14.067 "qid": 0, 00:14:14.067 "state": "enabled", 00:14:14.067 "thread": "nvmf_tgt_poll_group_000", 00:14:14.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:14.067 "listen_address": { 00:14:14.067 "trtype": "TCP", 00:14:14.067 "adrfam": "IPv4", 00:14:14.067 "traddr": "10.0.0.2", 00:14:14.067 "trsvcid": "4420" 00:14:14.067 }, 00:14:14.067 "peer_address": { 00:14:14.067 "trtype": "TCP", 00:14:14.067 "adrfam": "IPv4", 
00:14:14.067 "traddr": "10.0.0.1", 00:14:14.067 "trsvcid": "50018" 00:14:14.067 }, 00:14:14.067 "auth": { 00:14:14.067 "state": "completed", 00:14:14.067 "digest": "sha256", 00:14:14.067 "dhgroup": "ffdhe6144" 00:14:14.067 } 00:14:14.067 } 00:14:14.067 ]' 00:14:14.067 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.067 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.067 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.067 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:14.067 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.067 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.068 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.068 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.634 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:14:14.634 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret 
DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:14:15.568 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.568 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:15.568 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.568 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.568 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.568 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.568 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:15.568 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.568 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.135 00:14:16.135 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.135 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.135 
11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.393 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.393 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.393 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.393 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.393 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.393 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.393 { 00:14:16.393 "cntlid": 35, 00:14:16.393 "qid": 0, 00:14:16.393 "state": "enabled", 00:14:16.393 "thread": "nvmf_tgt_poll_group_000", 00:14:16.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:16.393 "listen_address": { 00:14:16.393 "trtype": "TCP", 00:14:16.393 "adrfam": "IPv4", 00:14:16.393 "traddr": "10.0.0.2", 00:14:16.393 "trsvcid": "4420" 00:14:16.393 }, 00:14:16.393 "peer_address": { 00:14:16.393 "trtype": "TCP", 00:14:16.393 "adrfam": "IPv4", 00:14:16.393 "traddr": "10.0.0.1", 00:14:16.393 "trsvcid": "50046" 00:14:16.393 }, 00:14:16.393 "auth": { 00:14:16.393 "state": "completed", 00:14:16.393 "digest": "sha256", 00:14:16.393 "dhgroup": "ffdhe6144" 00:14:16.393 } 00:14:16.393 } 00:14:16.393 ]' 00:14:16.393 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.651 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.651 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.651 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:16.651 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.651 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.651 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.651 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.940 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:14:16.940 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:14:17.895 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.895 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:17.895 11:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.895 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.895 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.895 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.895 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:17.895 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:18.154 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:18.154 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.154 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:18.154 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:18.154 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:18.154 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.154 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.154 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.154 11:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.154 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.154 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.154 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.154 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.720 00:14:18.720 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.720 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.720 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.979 { 00:14:18.979 "cntlid": 37, 00:14:18.979 "qid": 0, 00:14:18.979 "state": "enabled", 00:14:18.979 "thread": "nvmf_tgt_poll_group_000", 00:14:18.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:18.979 "listen_address": { 00:14:18.979 "trtype": "TCP", 00:14:18.979 "adrfam": "IPv4", 00:14:18.979 "traddr": "10.0.0.2", 00:14:18.979 "trsvcid": "4420" 00:14:18.979 }, 00:14:18.979 "peer_address": { 00:14:18.979 "trtype": "TCP", 00:14:18.979 "adrfam": "IPv4", 00:14:18.979 "traddr": "10.0.0.1", 00:14:18.979 "trsvcid": "50074" 00:14:18.979 }, 00:14:18.979 "auth": { 00:14:18.979 "state": "completed", 00:14:18.979 "digest": "sha256", 00:14:18.979 "dhgroup": "ffdhe6144" 00:14:18.979 } 00:14:18.979 } 00:14:18.979 ]' 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.979 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.237 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:19.237 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:20.171 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.171 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:20.171 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.171 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.171 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.171 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.171 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:20.171 11:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:20.429 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:20.429 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.429 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:20.429 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:20.429 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:20.429 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.429 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:14:20.429 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.429 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.429 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.429 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:20.429 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.429 11:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.996 00:14:20.996 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.996 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.996 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.255 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.255 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.255 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.255 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.255 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.255 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.255 { 00:14:21.255 "cntlid": 39, 00:14:21.255 "qid": 0, 00:14:21.255 "state": "enabled", 00:14:21.255 "thread": "nvmf_tgt_poll_group_000", 00:14:21.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:21.255 "listen_address": { 00:14:21.255 "trtype": "TCP", 00:14:21.255 "adrfam": "IPv4", 00:14:21.255 "traddr": "10.0.0.2", 00:14:21.255 "trsvcid": "4420" 00:14:21.255 }, 00:14:21.255 "peer_address": { 00:14:21.255 "trtype": 
"TCP", 00:14:21.255 "adrfam": "IPv4", 00:14:21.255 "traddr": "10.0.0.1", 00:14:21.255 "trsvcid": "50100" 00:14:21.255 }, 00:14:21.255 "auth": { 00:14:21.255 "state": "completed", 00:14:21.255 "digest": "sha256", 00:14:21.255 "dhgroup": "ffdhe6144" 00:14:21.255 } 00:14:21.255 } 00:14:21.255 ]' 00:14:21.255 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.513 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.513 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.513 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:21.514 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.514 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.514 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.514 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.772 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:14:21.772 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 
00:14:22.706 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.706 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:22.706 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.706 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.706 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.706 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:22.706 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.706 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:22.706 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key0 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.964 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.898 00:14:23.898 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.898 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.898 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.156 { 00:14:24.156 "cntlid": 41, 00:14:24.156 "qid": 0, 00:14:24.156 "state": "enabled", 00:14:24.156 "thread": "nvmf_tgt_poll_group_000", 00:14:24.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:24.156 "listen_address": { 00:14:24.156 "trtype": "TCP", 00:14:24.156 "adrfam": "IPv4", 00:14:24.156 "traddr": "10.0.0.2", 00:14:24.156 "trsvcid": "4420" 00:14:24.156 }, 00:14:24.156 "peer_address": { 00:14:24.156 "trtype": "TCP", 00:14:24.156 "adrfam": "IPv4", 00:14:24.156 "traddr": "10.0.0.1", 00:14:24.156 "trsvcid": "54868" 00:14:24.156 }, 00:14:24.156 "auth": { 00:14:24.156 "state": "completed", 00:14:24.156 "digest": "sha256", 00:14:24.156 "dhgroup": "ffdhe8192" 00:14:24.156 } 00:14:24.156 } 00:14:24.156 ]' 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.156 11:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.156 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.414 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:14:24.414 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:14:25.347 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.347 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:25.347 11:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.347 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.347 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.347 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.347 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:25.347 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:25.605 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:25.605 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.605 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.605 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:25.605 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:25.605 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.605 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.605 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.605 11:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.605 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.605 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.605 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.605 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.538 00:14:26.538 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.539 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.539 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.797 { 00:14:26.797 "cntlid": 43, 00:14:26.797 "qid": 0, 00:14:26.797 "state": "enabled", 00:14:26.797 "thread": "nvmf_tgt_poll_group_000", 00:14:26.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:26.797 "listen_address": { 00:14:26.797 "trtype": "TCP", 00:14:26.797 "adrfam": "IPv4", 00:14:26.797 "traddr": "10.0.0.2", 00:14:26.797 "trsvcid": "4420" 00:14:26.797 }, 00:14:26.797 "peer_address": { 00:14:26.797 "trtype": "TCP", 00:14:26.797 "adrfam": "IPv4", 00:14:26.797 "traddr": "10.0.0.1", 00:14:26.797 "trsvcid": "54892" 00:14:26.797 }, 00:14:26.797 "auth": { 00:14:26.797 "state": "completed", 00:14:26.797 "digest": "sha256", 00:14:26.797 "dhgroup": "ffdhe8192" 00:14:26.797 } 00:14:26.797 } 00:14:26.797 ]' 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.797 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.055 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:14:27.055 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:14:27.989 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.989 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:27.989 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.989 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.989 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.989 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.989 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:27.989 11:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.246 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.179 00:14:29.179 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.179 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.179 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.437 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.437 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.437 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.437 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.437 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.437 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.437 { 00:14:29.437 "cntlid": 45, 00:14:29.437 "qid": 0, 00:14:29.437 "state": "enabled", 00:14:29.437 "thread": "nvmf_tgt_poll_group_000", 00:14:29.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:29.437 "listen_address": { 00:14:29.437 "trtype": "TCP", 00:14:29.437 "adrfam": "IPv4", 00:14:29.437 "traddr": "10.0.0.2", 00:14:29.437 
"trsvcid": "4420" 00:14:29.437 }, 00:14:29.437 "peer_address": { 00:14:29.437 "trtype": "TCP", 00:14:29.437 "adrfam": "IPv4", 00:14:29.437 "traddr": "10.0.0.1", 00:14:29.437 "trsvcid": "54922" 00:14:29.437 }, 00:14:29.437 "auth": { 00:14:29.437 "state": "completed", 00:14:29.437 "digest": "sha256", 00:14:29.437 "dhgroup": "ffdhe8192" 00:14:29.437 } 00:14:29.437 } 00:14:29.437 ]' 00:14:29.437 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.437 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.437 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.695 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:29.695 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.695 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.695 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.695 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.953 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:29.953 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 
8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:30.886 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.887 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:30.887 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.887 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.887 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.887 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.887 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:30.887 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.145 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:32.080 00:14:32.080 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.080 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.080 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.080 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.080 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.080 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.080 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.080 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.080 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.080 { 00:14:32.080 "cntlid": 47, 00:14:32.080 "qid": 0, 00:14:32.080 "state": "enabled", 00:14:32.080 "thread": "nvmf_tgt_poll_group_000", 00:14:32.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:32.080 "listen_address": { 00:14:32.080 "trtype": "TCP", 00:14:32.080 "adrfam": "IPv4", 00:14:32.080 "traddr": "10.0.0.2", 00:14:32.080 "trsvcid": "4420" 00:14:32.080 }, 00:14:32.080 "peer_address": { 00:14:32.080 "trtype": "TCP", 00:14:32.080 "adrfam": "IPv4", 00:14:32.080 "traddr": "10.0.0.1", 00:14:32.080 "trsvcid": "56436" 00:14:32.080 }, 00:14:32.080 "auth": { 00:14:32.080 "state": "completed", 00:14:32.080 "digest": "sha256", 00:14:32.080 "dhgroup": "ffdhe8192" 00:14:32.080 } 00:14:32.080 } 00:14:32.080 ]' 00:14:32.080 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.338 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.338 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.338 11:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:32.338 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.338 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.338 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.338 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.596 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:14:32.596 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:14:33.531 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.531 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:33.531 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.531 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:33.531 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.531 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:33.531 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.531 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.531 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:33.531 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.789 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.354 00:14:34.354 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.354 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.354 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.612 11:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.612 { 00:14:34.612 "cntlid": 49, 00:14:34.612 "qid": 0, 00:14:34.612 "state": "enabled", 00:14:34.612 "thread": "nvmf_tgt_poll_group_000", 00:14:34.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:34.612 "listen_address": { 00:14:34.612 "trtype": "TCP", 00:14:34.612 "adrfam": "IPv4", 00:14:34.612 "traddr": "10.0.0.2", 00:14:34.612 "trsvcid": "4420" 00:14:34.612 }, 00:14:34.612 "peer_address": { 00:14:34.612 "trtype": "TCP", 00:14:34.612 "adrfam": "IPv4", 00:14:34.612 "traddr": "10.0.0.1", 00:14:34.612 "trsvcid": "56470" 00:14:34.612 }, 00:14:34.612 "auth": { 00:14:34.612 "state": "completed", 00:14:34.612 "digest": "sha384", 00:14:34.612 "dhgroup": "null" 00:14:34.612 } 00:14:34.612 } 00:14:34.612 ]' 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.612 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.871 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:14:34.871 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:14:35.805 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.805 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:35.805 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.805 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.805 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.805 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.805 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:35.805 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.063 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.321 00:14:36.321 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.321 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.321 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.579 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.579 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.579 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.579 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.579 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.579 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.579 { 00:14:36.579 "cntlid": 51, 00:14:36.579 "qid": 0, 00:14:36.579 "state": "enabled", 00:14:36.579 "thread": "nvmf_tgt_poll_group_000", 00:14:36.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:36.579 "listen_address": { 
00:14:36.579 "trtype": "TCP", 00:14:36.579 "adrfam": "IPv4", 00:14:36.579 "traddr": "10.0.0.2", 00:14:36.579 "trsvcid": "4420" 00:14:36.579 }, 00:14:36.579 "peer_address": { 00:14:36.579 "trtype": "TCP", 00:14:36.579 "adrfam": "IPv4", 00:14:36.579 "traddr": "10.0.0.1", 00:14:36.579 "trsvcid": "56492" 00:14:36.579 }, 00:14:36.579 "auth": { 00:14:36.579 "state": "completed", 00:14:36.579 "digest": "sha384", 00:14:36.579 "dhgroup": "null" 00:14:36.579 } 00:14:36.579 } 00:14:36.579 ]' 00:14:36.580 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.838 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:36.838 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.838 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:36.838 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.838 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.838 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.838 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.096 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:14:37.096 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:14:38.030 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.030 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:38.030 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.030 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.030 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.030 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.030 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:38.030 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:38.288 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:38.288 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.288 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:38.288 
11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:38.288 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:38.288 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.288 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.288 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.288 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.288 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.288 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.288 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.288 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.853 00:14:38.853 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.853 11:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.853 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.110 { 00:14:39.110 "cntlid": 53, 00:14:39.110 "qid": 0, 00:14:39.110 "state": "enabled", 00:14:39.110 "thread": "nvmf_tgt_poll_group_000", 00:14:39.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:39.110 "listen_address": { 00:14:39.110 "trtype": "TCP", 00:14:39.110 "adrfam": "IPv4", 00:14:39.110 "traddr": "10.0.0.2", 00:14:39.110 "trsvcid": "4420" 00:14:39.110 }, 00:14:39.110 "peer_address": { 00:14:39.110 "trtype": "TCP", 00:14:39.110 "adrfam": "IPv4", 00:14:39.110 "traddr": "10.0.0.1", 00:14:39.110 "trsvcid": "56524" 00:14:39.110 }, 00:14:39.110 "auth": { 00:14:39.110 "state": "completed", 00:14:39.110 "digest": "sha384", 00:14:39.110 "dhgroup": "null" 00:14:39.110 } 00:14:39.110 } 00:14:39.110 ]' 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.110 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.368 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:39.368 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:40.303 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.303 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:40.303 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.303 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.303 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.303 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.303 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:40.303 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.561 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.819 00:14:40.819 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.819 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.819 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.077 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.077 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.077 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.077 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:41.077 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.077 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.077 { 00:14:41.077 "cntlid": 55, 00:14:41.077 "qid": 0, 00:14:41.077 "state": "enabled", 00:14:41.077 "thread": "nvmf_tgt_poll_group_000", 00:14:41.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:41.077 "listen_address": { 00:14:41.077 "trtype": "TCP", 00:14:41.077 "adrfam": "IPv4", 00:14:41.077 "traddr": "10.0.0.2", 00:14:41.077 "trsvcid": "4420" 00:14:41.077 }, 00:14:41.077 "peer_address": { 00:14:41.077 "trtype": "TCP", 00:14:41.077 "adrfam": "IPv4", 00:14:41.077 "traddr": "10.0.0.1", 00:14:41.077 "trsvcid": "56532" 00:14:41.077 }, 00:14:41.077 "auth": { 00:14:41.077 "state": "completed", 00:14:41.077 "digest": "sha384", 00:14:41.077 "dhgroup": "null" 00:14:41.077 } 00:14:41.077 } 00:14:41.077 ]' 00:14:41.077 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.077 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.077 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.335 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:41.335 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.335 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.335 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.335 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.593 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:14:41.593 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:14:42.580 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.580 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:42.580 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.580 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.580 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.580 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:42.580 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.580 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:42.580 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:42.837 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:42.837 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.837 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:42.838 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:42.838 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:42.838 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.838 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.838 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.838 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.838 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.838 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.838 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.838 11:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.095 00:14:43.095 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.095 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.095 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.353 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.353 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.353 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.353 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.611 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.611 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.611 { 00:14:43.611 "cntlid": 57, 00:14:43.611 "qid": 0, 00:14:43.611 "state": "enabled", 00:14:43.611 "thread": "nvmf_tgt_poll_group_000", 00:14:43.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:43.611 "listen_address": { 00:14:43.611 "trtype": "TCP", 00:14:43.611 "adrfam": "IPv4", 00:14:43.611 "traddr": "10.0.0.2", 00:14:43.611 "trsvcid": "4420" 00:14:43.611 }, 00:14:43.611 "peer_address": { 
00:14:43.611 "trtype": "TCP", 00:14:43.611 "adrfam": "IPv4", 00:14:43.611 "traddr": "10.0.0.1", 00:14:43.611 "trsvcid": "54024" 00:14:43.611 }, 00:14:43.611 "auth": { 00:14:43.611 "state": "completed", 00:14:43.611 "digest": "sha384", 00:14:43.611 "dhgroup": "ffdhe2048" 00:14:43.611 } 00:14:43.611 } 00:14:43.611 ]' 00:14:43.611 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.611 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:43.611 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.611 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:43.611 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.611 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.611 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.611 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.870 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:14:43.871 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 
8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:14:44.807 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.807 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:44.807 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.807 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.807 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.807 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.807 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:44.807 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:45.066 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:45.066 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.066 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:45.066 11:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:45.066 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:45.066 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.066 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.066 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.066 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.066 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.066 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.067 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.067 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.324 00:14:45.324 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.324 11:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.324 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.582 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.582 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.582 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.582 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.582 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.582 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.582 { 00:14:45.582 "cntlid": 59, 00:14:45.582 "qid": 0, 00:14:45.582 "state": "enabled", 00:14:45.582 "thread": "nvmf_tgt_poll_group_000", 00:14:45.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:45.582 "listen_address": { 00:14:45.582 "trtype": "TCP", 00:14:45.582 "adrfam": "IPv4", 00:14:45.582 "traddr": "10.0.0.2", 00:14:45.582 "trsvcid": "4420" 00:14:45.582 }, 00:14:45.582 "peer_address": { 00:14:45.582 "trtype": "TCP", 00:14:45.582 "adrfam": "IPv4", 00:14:45.582 "traddr": "10.0.0.1", 00:14:45.582 "trsvcid": "54052" 00:14:45.582 }, 00:14:45.582 "auth": { 00:14:45.582 "state": "completed", 00:14:45.582 "digest": "sha384", 00:14:45.582 "dhgroup": "ffdhe2048" 00:14:45.582 } 00:14:45.582 } 00:14:45.582 ]' 00:14:45.582 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.840 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:14:45.840 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.840 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:45.840 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.840 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.840 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.840 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.098 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:14:46.098 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:14:47.029 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.029 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:47.029 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.029 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.029 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.029 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.029 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:47.029 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:47.287 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:47.287 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.287 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:47.287 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:47.287 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:47.287 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.287 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.287 11:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.287 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.287 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.287 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.287 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.287 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.853 00:14:47.853 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.854 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.854 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.111 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.111 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.111 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.111 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.111 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.111 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.111 { 00:14:48.111 "cntlid": 61, 00:14:48.111 "qid": 0, 00:14:48.111 "state": "enabled", 00:14:48.111 "thread": "nvmf_tgt_poll_group_000", 00:14:48.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:48.111 "listen_address": { 00:14:48.111 "trtype": "TCP", 00:14:48.112 "adrfam": "IPv4", 00:14:48.112 "traddr": "10.0.0.2", 00:14:48.112 "trsvcid": "4420" 00:14:48.112 }, 00:14:48.112 "peer_address": { 00:14:48.112 "trtype": "TCP", 00:14:48.112 "adrfam": "IPv4", 00:14:48.112 "traddr": "10.0.0.1", 00:14:48.112 "trsvcid": "54088" 00:14:48.112 }, 00:14:48.112 "auth": { 00:14:48.112 "state": "completed", 00:14:48.112 "digest": "sha384", 00:14:48.112 "dhgroup": "ffdhe2048" 00:14:48.112 } 00:14:48.112 } 00:14:48.112 ]' 00:14:48.112 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.112 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:48.112 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.112 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:48.112 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.112 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.112 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:48.112 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.370 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:48.370 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:49.303 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.303 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:49.303 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.303 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.303 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.303 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.303 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:49.304 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:49.562 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:49.562 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.562 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:49.562 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:49.562 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:49.562 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.562 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:14:49.562 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.562 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.562 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.562 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:49.562 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.562 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.128 00:14:50.128 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.128 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.128 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.386 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.386 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.386 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.386 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.386 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.386 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.386 { 00:14:50.386 "cntlid": 63, 00:14:50.386 "qid": 0, 00:14:50.386 "state": "enabled", 00:14:50.386 "thread": "nvmf_tgt_poll_group_000", 00:14:50.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:50.386 "listen_address": { 00:14:50.386 "trtype": "TCP", 00:14:50.386 "adrfam": "IPv4", 00:14:50.386 "traddr": "10.0.0.2", 00:14:50.386 "trsvcid": 
"4420" 00:14:50.386 }, 00:14:50.386 "peer_address": { 00:14:50.386 "trtype": "TCP", 00:14:50.386 "adrfam": "IPv4", 00:14:50.386 "traddr": "10.0.0.1", 00:14:50.386 "trsvcid": "54116" 00:14:50.386 }, 00:14:50.386 "auth": { 00:14:50.386 "state": "completed", 00:14:50.386 "digest": "sha384", 00:14:50.387 "dhgroup": "ffdhe2048" 00:14:50.387 } 00:14:50.387 } 00:14:50.387 ]' 00:14:50.387 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.387 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:50.387 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.387 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:50.387 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.387 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.387 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.387 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.953 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:14:50.953 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret 
DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:14:51.886 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.887 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:51.887 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.887 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.887 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.887 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.887 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.887 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:51.887 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:52.144 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:14:52.145 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.145 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:52.145 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:52.145 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:52.145 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.145 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.145 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.145 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.145 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.145 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.145 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.145 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.402 00:14:52.402 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.402 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:14:52.402 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.661 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.661 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.661 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.661 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.661 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.661 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.661 { 00:14:52.661 "cntlid": 65, 00:14:52.661 "qid": 0, 00:14:52.661 "state": "enabled", 00:14:52.661 "thread": "nvmf_tgt_poll_group_000", 00:14:52.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:52.661 "listen_address": { 00:14:52.661 "trtype": "TCP", 00:14:52.661 "adrfam": "IPv4", 00:14:52.661 "traddr": "10.0.0.2", 00:14:52.661 "trsvcid": "4420" 00:14:52.661 }, 00:14:52.661 "peer_address": { 00:14:52.661 "trtype": "TCP", 00:14:52.661 "adrfam": "IPv4", 00:14:52.661 "traddr": "10.0.0.1", 00:14:52.661 "trsvcid": "49448" 00:14:52.661 }, 00:14:52.661 "auth": { 00:14:52.661 "state": "completed", 00:14:52.661 "digest": "sha384", 00:14:52.661 "dhgroup": "ffdhe3072" 00:14:52.661 } 00:14:52.661 } 00:14:52.661 ]' 00:14:52.661 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.661 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:52.661 11:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.918 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:52.918 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.918 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.918 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.918 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.177 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:14:53.177 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:14:54.112 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.112 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:54.112 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.112 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.112 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.112 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.112 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:54.112 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:54.371 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:14:54.371 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.371 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:54.371 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:54.371 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:54.371 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.371 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.371 11:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.371 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.371 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.371 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.371 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.371 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.937 00:14:54.937 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.937 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.937 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.938 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.938 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.938 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.938 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.938 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.196 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.196 { 00:14:55.196 "cntlid": 67, 00:14:55.196 "qid": 0, 00:14:55.196 "state": "enabled", 00:14:55.196 "thread": "nvmf_tgt_poll_group_000", 00:14:55.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:55.196 "listen_address": { 00:14:55.196 "trtype": "TCP", 00:14:55.196 "adrfam": "IPv4", 00:14:55.196 "traddr": "10.0.0.2", 00:14:55.196 "trsvcid": "4420" 00:14:55.196 }, 00:14:55.196 "peer_address": { 00:14:55.196 "trtype": "TCP", 00:14:55.196 "adrfam": "IPv4", 00:14:55.196 "traddr": "10.0.0.1", 00:14:55.196 "trsvcid": "49464" 00:14:55.196 }, 00:14:55.196 "auth": { 00:14:55.196 "state": "completed", 00:14:55.196 "digest": "sha384", 00:14:55.196 "dhgroup": "ffdhe3072" 00:14:55.196 } 00:14:55.196 } 00:14:55.196 ]' 00:14:55.196 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.196 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.196 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.196 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:55.196 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.196 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.196 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:55.196 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.454 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:14:55.454 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:14:56.390 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.390 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:56.390 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.390 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.390 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.390 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.390 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:56.390 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.650 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.217 00:14:57.217 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.217 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.217 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.217 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.217 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.217 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.217 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.475 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.475 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.475 { 00:14:57.475 "cntlid": 69, 00:14:57.475 "qid": 0, 00:14:57.475 "state": "enabled", 00:14:57.475 "thread": "nvmf_tgt_poll_group_000", 00:14:57.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:57.475 "listen_address": { 
00:14:57.475 "trtype": "TCP", 00:14:57.475 "adrfam": "IPv4", 00:14:57.475 "traddr": "10.0.0.2", 00:14:57.475 "trsvcid": "4420" 00:14:57.475 }, 00:14:57.475 "peer_address": { 00:14:57.475 "trtype": "TCP", 00:14:57.475 "adrfam": "IPv4", 00:14:57.475 "traddr": "10.0.0.1", 00:14:57.475 "trsvcid": "49482" 00:14:57.475 }, 00:14:57.475 "auth": { 00:14:57.475 "state": "completed", 00:14:57.475 "digest": "sha384", 00:14:57.475 "dhgroup": "ffdhe3072" 00:14:57.475 } 00:14:57.475 } 00:14:57.475 ]' 00:14:57.475 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.475 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.475 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.475 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:57.475 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.475 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.475 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.475 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.733 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:57.733 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:14:58.667 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.667 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:58.667 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.667 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.667 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.667 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.667 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:58.667 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.233 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.491 00:14:59.491 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.491 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:14:59.491 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.749 { 00:14:59.749 "cntlid": 71, 00:14:59.749 "qid": 0, 00:14:59.749 "state": "enabled", 00:14:59.749 "thread": "nvmf_tgt_poll_group_000", 00:14:59.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:14:59.749 "listen_address": { 00:14:59.749 "trtype": "TCP", 00:14:59.749 "adrfam": "IPv4", 00:14:59.749 "traddr": "10.0.0.2", 00:14:59.749 "trsvcid": "4420" 00:14:59.749 }, 00:14:59.749 "peer_address": { 00:14:59.749 "trtype": "TCP", 00:14:59.749 "adrfam": "IPv4", 00:14:59.749 "traddr": "10.0.0.1", 00:14:59.749 "trsvcid": "49502" 00:14:59.749 }, 00:14:59.749 "auth": { 00:14:59.749 "state": "completed", 00:14:59.749 "digest": "sha384", 00:14:59.749 "dhgroup": "ffdhe3072" 00:14:59.749 } 00:14:59.749 } 00:14:59.749 ]' 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:59.749 11:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.749 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.007 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:00.007 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:00.941 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.941 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:00.941 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:00.941 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.941 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.941 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:00.941 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.941 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:00.941 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.508 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.766 00:15:01.767 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.767 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.767 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.025 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.025 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.025 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.025 11:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.025 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.025 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.025 { 00:15:02.025 "cntlid": 73, 00:15:02.025 "qid": 0, 00:15:02.025 "state": "enabled", 00:15:02.025 "thread": "nvmf_tgt_poll_group_000", 00:15:02.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:02.025 "listen_address": { 00:15:02.025 "trtype": "TCP", 00:15:02.025 "adrfam": "IPv4", 00:15:02.025 "traddr": "10.0.0.2", 00:15:02.025 "trsvcid": "4420" 00:15:02.025 }, 00:15:02.025 "peer_address": { 00:15:02.025 "trtype": "TCP", 00:15:02.025 "adrfam": "IPv4", 00:15:02.025 "traddr": "10.0.0.1", 00:15:02.025 "trsvcid": "53488" 00:15:02.025 }, 00:15:02.025 "auth": { 00:15:02.025 "state": "completed", 00:15:02.025 "digest": "sha384", 00:15:02.025 "dhgroup": "ffdhe4096" 00:15:02.025 } 00:15:02.025 } 00:15:02.025 ]' 00:15:02.025 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.025 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.025 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.025 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:02.025 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.282 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.282 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.282 11:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.540 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:02.540 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:03.474 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.474 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:03.474 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.474 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.474 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.474 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.474 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:03.474 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.732 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.297 00:15:04.297 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.297 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.297 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.297 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.297 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.297 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.297 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.297 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.297 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.297 { 00:15:04.297 "cntlid": 75, 00:15:04.297 "qid": 0, 00:15:04.297 "state": "enabled", 00:15:04.297 "thread": "nvmf_tgt_poll_group_000", 00:15:04.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:04.297 
"listen_address": { 00:15:04.297 "trtype": "TCP", 00:15:04.297 "adrfam": "IPv4", 00:15:04.297 "traddr": "10.0.0.2", 00:15:04.297 "trsvcid": "4420" 00:15:04.297 }, 00:15:04.297 "peer_address": { 00:15:04.297 "trtype": "TCP", 00:15:04.297 "adrfam": "IPv4", 00:15:04.297 "traddr": "10.0.0.1", 00:15:04.297 "trsvcid": "53500" 00:15:04.297 }, 00:15:04.297 "auth": { 00:15:04.297 "state": "completed", 00:15:04.297 "digest": "sha384", 00:15:04.297 "dhgroup": "ffdhe4096" 00:15:04.297 } 00:15:04.297 } 00:15:04.297 ]' 00:15:04.297 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.556 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.556 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.556 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:04.556 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.556 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.556 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.556 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.815 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:15:04.815 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:15:05.748 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.749 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:05.749 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.749 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.749 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.749 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.749 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:05.749 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.006 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.572 00:15:06.572 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:06.572 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.572 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.850 { 00:15:06.850 "cntlid": 77, 00:15:06.850 "qid": 0, 00:15:06.850 "state": "enabled", 00:15:06.850 "thread": "nvmf_tgt_poll_group_000", 00:15:06.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:06.850 "listen_address": { 00:15:06.850 "trtype": "TCP", 00:15:06.850 "adrfam": "IPv4", 00:15:06.850 "traddr": "10.0.0.2", 00:15:06.850 "trsvcid": "4420" 00:15:06.850 }, 00:15:06.850 "peer_address": { 00:15:06.850 "trtype": "TCP", 00:15:06.850 "adrfam": "IPv4", 00:15:06.850 "traddr": "10.0.0.1", 00:15:06.850 "trsvcid": "53530" 00:15:06.850 }, 00:15:06.850 "auth": { 00:15:06.850 "state": "completed", 00:15:06.850 "digest": "sha384", 00:15:06.850 "dhgroup": "ffdhe4096" 00:15:06.850 } 00:15:06.850 } 00:15:06.850 ]' 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.850 11:17:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.850 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.140 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:15:07.140 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:15:08.074 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.074 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:08.074 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.074 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.074 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.074 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.074 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:08.074 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:08.332 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:08.332 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.332 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:08.332 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:08.332 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:08.332 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.332 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:15:08.332 11:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.332 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.332 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.332 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:08.332 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.332 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.590 00:15:08.590 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.590 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.590 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.848 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.105 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.105 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.105 11:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.105 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.105 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.105 { 00:15:09.105 "cntlid": 79, 00:15:09.105 "qid": 0, 00:15:09.105 "state": "enabled", 00:15:09.105 "thread": "nvmf_tgt_poll_group_000", 00:15:09.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:09.105 "listen_address": { 00:15:09.105 "trtype": "TCP", 00:15:09.105 "adrfam": "IPv4", 00:15:09.105 "traddr": "10.0.0.2", 00:15:09.105 "trsvcid": "4420" 00:15:09.105 }, 00:15:09.105 "peer_address": { 00:15:09.105 "trtype": "TCP", 00:15:09.105 "adrfam": "IPv4", 00:15:09.105 "traddr": "10.0.0.1", 00:15:09.105 "trsvcid": "53562" 00:15:09.105 }, 00:15:09.105 "auth": { 00:15:09.105 "state": "completed", 00:15:09.105 "digest": "sha384", 00:15:09.105 "dhgroup": "ffdhe4096" 00:15:09.105 } 00:15:09.105 } 00:15:09.105 ]' 00:15:09.105 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.105 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.105 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.105 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:09.105 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.105 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.105 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.105 11:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.362 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:09.362 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:10.295 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.295 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:10.295 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.295 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.295 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.295 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.295 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.295 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:15:10.295 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.554 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.120 00:15:11.121 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.121 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.121 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.378 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.378 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.378 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.378 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.378 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.378 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.378 { 00:15:11.378 "cntlid": 81, 00:15:11.378 "qid": 0, 00:15:11.378 "state": "enabled", 00:15:11.378 "thread": "nvmf_tgt_poll_group_000", 00:15:11.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:11.378 "listen_address": { 
00:15:11.378 "trtype": "TCP", 00:15:11.378 "adrfam": "IPv4", 00:15:11.378 "traddr": "10.0.0.2", 00:15:11.378 "trsvcid": "4420" 00:15:11.378 }, 00:15:11.378 "peer_address": { 00:15:11.378 "trtype": "TCP", 00:15:11.378 "adrfam": "IPv4", 00:15:11.378 "traddr": "10.0.0.1", 00:15:11.378 "trsvcid": "53578" 00:15:11.378 }, 00:15:11.378 "auth": { 00:15:11.378 "state": "completed", 00:15:11.378 "digest": "sha384", 00:15:11.378 "dhgroup": "ffdhe6144" 00:15:11.378 } 00:15:11.378 } 00:15:11.378 ]' 00:15:11.378 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.379 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.379 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.379 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:11.379 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.379 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.379 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.379 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.636 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:11.636 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:12.569 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.570 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:12.570 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.570 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.570 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.570 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.570 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:12.570 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.828 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.394 00:15:13.394 11:17:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.394 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.394 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.653 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.653 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.653 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.653 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.653 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.653 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.653 { 00:15:13.653 "cntlid": 83, 00:15:13.653 "qid": 0, 00:15:13.653 "state": "enabled", 00:15:13.653 "thread": "nvmf_tgt_poll_group_000", 00:15:13.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:13.653 "listen_address": { 00:15:13.653 "trtype": "TCP", 00:15:13.653 "adrfam": "IPv4", 00:15:13.653 "traddr": "10.0.0.2", 00:15:13.653 "trsvcid": "4420" 00:15:13.653 }, 00:15:13.653 "peer_address": { 00:15:13.653 "trtype": "TCP", 00:15:13.653 "adrfam": "IPv4", 00:15:13.653 "traddr": "10.0.0.1", 00:15:13.653 "trsvcid": "52724" 00:15:13.653 }, 00:15:13.653 "auth": { 00:15:13.653 "state": "completed", 00:15:13.653 "digest": "sha384", 00:15:13.653 "dhgroup": "ffdhe6144" 00:15:13.653 } 00:15:13.653 } 00:15:13.653 ]' 00:15:13.653 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:15:13.653 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.653 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.910 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:13.910 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.910 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.910 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.910 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.168 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:15:14.168 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:15:15.103 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.103 11:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:15.103 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.103 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.103 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.103 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.103 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:15.103 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:15.360 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:15.360 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.360 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:15.360 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:15.360 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:15.360 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.360 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.360 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.360 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.360 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.360 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.360 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.361 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.927 00:15:15.927 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.927 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.927 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.185 { 00:15:16.185 "cntlid": 85, 00:15:16.185 "qid": 0, 00:15:16.185 "state": "enabled", 00:15:16.185 "thread": "nvmf_tgt_poll_group_000", 00:15:16.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:16.185 "listen_address": { 00:15:16.185 "trtype": "TCP", 00:15:16.185 "adrfam": "IPv4", 00:15:16.185 "traddr": "10.0.0.2", 00:15:16.185 "trsvcid": "4420" 00:15:16.185 }, 00:15:16.185 "peer_address": { 00:15:16.185 "trtype": "TCP", 00:15:16.185 "adrfam": "IPv4", 00:15:16.185 "traddr": "10.0.0.1", 00:15:16.185 "trsvcid": "52750" 00:15:16.185 }, 00:15:16.185 "auth": { 00:15:16.185 "state": "completed", 00:15:16.185 "digest": "sha384", 00:15:16.185 "dhgroup": "ffdhe6144" 00:15:16.185 } 00:15:16.185 } 00:15:16.185 ]' 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.185 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.443 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:15:16.443 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:15:17.376 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.376 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:17.376 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.376 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.377 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.377 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:17.377 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:17.377 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.634 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.200 00:15:18.200 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.200 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.200 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.458 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.458 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.458 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.458 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.458 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.458 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.458 { 00:15:18.458 "cntlid": 87, 00:15:18.458 "qid": 0, 00:15:18.458 "state": "enabled", 00:15:18.458 "thread": "nvmf_tgt_poll_group_000", 00:15:18.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:18.458 "listen_address": { 00:15:18.458 "trtype": 
"TCP", 00:15:18.458 "adrfam": "IPv4", 00:15:18.458 "traddr": "10.0.0.2", 00:15:18.458 "trsvcid": "4420" 00:15:18.458 }, 00:15:18.458 "peer_address": { 00:15:18.458 "trtype": "TCP", 00:15:18.458 "adrfam": "IPv4", 00:15:18.458 "traddr": "10.0.0.1", 00:15:18.458 "trsvcid": "52786" 00:15:18.458 }, 00:15:18.458 "auth": { 00:15:18.458 "state": "completed", 00:15:18.458 "digest": "sha384", 00:15:18.458 "dhgroup": "ffdhe6144" 00:15:18.458 } 00:15:18.458 } 00:15:18.458 ]' 00:15:18.458 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.458 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.458 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.716 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:18.716 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.716 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.716 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.716 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.974 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:18.974 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:19.908 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.908 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:19.908 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.908 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.908 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.908 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.908 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.908 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:19.908 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:20.166 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:20.166 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.166 11:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:20.166 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:20.166 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:20.166 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.166 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.166 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.166 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.166 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.166 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.166 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.166 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.099 00:15:21.099 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.099 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.099 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.357 { 00:15:21.357 "cntlid": 89, 00:15:21.357 "qid": 0, 00:15:21.357 "state": "enabled", 00:15:21.357 "thread": "nvmf_tgt_poll_group_000", 00:15:21.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:21.357 "listen_address": { 00:15:21.357 "trtype": "TCP", 00:15:21.357 "adrfam": "IPv4", 00:15:21.357 "traddr": "10.0.0.2", 00:15:21.357 "trsvcid": "4420" 00:15:21.357 }, 00:15:21.357 "peer_address": { 00:15:21.357 "trtype": "TCP", 00:15:21.357 "adrfam": "IPv4", 00:15:21.357 "traddr": "10.0.0.1", 00:15:21.357 "trsvcid": "52796" 00:15:21.357 }, 00:15:21.357 "auth": { 00:15:21.357 "state": "completed", 00:15:21.357 "digest": "sha384", 00:15:21.357 "dhgroup": "ffdhe8192" 00:15:21.357 } 00:15:21.357 } 00:15:21.357 ]' 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.357 11:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.357 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.615 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:21.615 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:22.548 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:15:22.548 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:22.548 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.548 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.548 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.548 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.548 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:22.548 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.806 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.740 00:15:23.740 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.740 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.740 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.997 { 00:15:23.997 "cntlid": 91, 00:15:23.997 "qid": 0, 00:15:23.997 "state": "enabled", 00:15:23.997 "thread": "nvmf_tgt_poll_group_000", 00:15:23.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:23.997 "listen_address": { 00:15:23.997 "trtype": "TCP", 00:15:23.997 "adrfam": "IPv4", 00:15:23.997 "traddr": "10.0.0.2", 00:15:23.997 "trsvcid": "4420" 00:15:23.997 }, 00:15:23.997 "peer_address": { 00:15:23.997 "trtype": "TCP", 00:15:23.997 "adrfam": "IPv4", 00:15:23.997 "traddr": "10.0.0.1", 00:15:23.997 "trsvcid": "53356" 00:15:23.997 }, 00:15:23.997 "auth": { 00:15:23.997 "state": "completed", 00:15:23.997 "digest": "sha384", 00:15:23.997 "dhgroup": "ffdhe8192" 00:15:23.997 } 00:15:23.997 } 00:15:23.997 ]' 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.997 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.255 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:15:24.256 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:15:25.216 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.216 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:25.216 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.216 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.216 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.216 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:25.216 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:25.216 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.474 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.408 00:15:26.408 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.408 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.408 11:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.666 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.666 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.666 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.666 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.666 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.666 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.666 { 00:15:26.667 "cntlid": 93, 00:15:26.667 "qid": 0, 00:15:26.667 "state": "enabled", 00:15:26.667 "thread": "nvmf_tgt_poll_group_000", 00:15:26.667 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:26.667 "listen_address": { 00:15:26.667 "trtype": "TCP", 00:15:26.667 "adrfam": "IPv4", 00:15:26.667 "traddr": "10.0.0.2", 00:15:26.667 "trsvcid": "4420" 00:15:26.667 }, 00:15:26.667 "peer_address": { 00:15:26.667 "trtype": "TCP", 00:15:26.667 "adrfam": "IPv4", 00:15:26.667 "traddr": "10.0.0.1", 00:15:26.667 "trsvcid": "53376" 00:15:26.667 }, 00:15:26.667 "auth": { 00:15:26.667 "state": "completed", 00:15:26.667 "digest": "sha384", 00:15:26.667 "dhgroup": "ffdhe8192" 00:15:26.667 } 00:15:26.667 } 00:15:26.667 ]' 00:15:26.667 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.667 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.667 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.925 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.925 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.925 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.925 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.925 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.183 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:15:27.183 11:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:15:28.116 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.116 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:28.116 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.116 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.116 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.116 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.116 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:28.116 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:28.374 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:28.374 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:15:28.375 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:28.375 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:28.375 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:28.375 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.375 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:15:28.375 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.375 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.375 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.375 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.375 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.375 11:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.308 00:15:29.308 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:29.308 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.308 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.565 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.565 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.565 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.565 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.565 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.565 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.565 { 00:15:29.565 "cntlid": 95, 00:15:29.565 "qid": 0, 00:15:29.565 "state": "enabled", 00:15:29.565 "thread": "nvmf_tgt_poll_group_000", 00:15:29.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:29.565 "listen_address": { 00:15:29.565 "trtype": "TCP", 00:15:29.565 "adrfam": "IPv4", 00:15:29.565 "traddr": "10.0.0.2", 00:15:29.565 "trsvcid": "4420" 00:15:29.565 }, 00:15:29.565 "peer_address": { 00:15:29.565 "trtype": "TCP", 00:15:29.565 "adrfam": "IPv4", 00:15:29.565 "traddr": "10.0.0.1", 00:15:29.565 "trsvcid": "53404" 00:15:29.566 }, 00:15:29.566 "auth": { 00:15:29.566 "state": "completed", 00:15:29.566 "digest": "sha384", 00:15:29.566 "dhgroup": "ffdhe8192" 00:15:29.566 } 00:15:29.566 } 00:15:29.566 ]' 00:15:29.566 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.566 11:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.566 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.566 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.566 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.566 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.566 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.566 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.823 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:29.823 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:30.757 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.757 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:30.757 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.757 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.757 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.758 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:30.758 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.758 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.758 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:30.758 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:31.324 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:31.324 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.324 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:31.324 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:31.324 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:31.324 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.324 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.325 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.325 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.325 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.325 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.325 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.325 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.582 00:15:31.582 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.582 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.582 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.840 11:17:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.840 { 00:15:31.840 "cntlid": 97, 00:15:31.840 "qid": 0, 00:15:31.840 "state": "enabled", 00:15:31.840 "thread": "nvmf_tgt_poll_group_000", 00:15:31.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:31.840 "listen_address": { 00:15:31.840 "trtype": "TCP", 00:15:31.840 "adrfam": "IPv4", 00:15:31.840 "traddr": "10.0.0.2", 00:15:31.840 "trsvcid": "4420" 00:15:31.840 }, 00:15:31.840 "peer_address": { 00:15:31.840 "trtype": "TCP", 00:15:31.840 "adrfam": "IPv4", 00:15:31.840 "traddr": "10.0.0.1", 00:15:31.840 "trsvcid": "37504" 00:15:31.840 }, 00:15:31.840 "auth": { 00:15:31.840 "state": "completed", 00:15:31.840 "digest": "sha512", 00:15:31.840 "dhgroup": "null" 00:15:31.840 } 00:15:31.840 } 00:15:31.840 ]' 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.840 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.098 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:32.098 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:33.031 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.031 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:33.031 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.031 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.031 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.031 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.032 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:33.032 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.290 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.857 00:15:33.857 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.857 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.857 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.857 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.857 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.857 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.857 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.115 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.115 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.115 { 00:15:34.115 "cntlid": 99, 
00:15:34.115 "qid": 0, 00:15:34.115 "state": "enabled", 00:15:34.115 "thread": "nvmf_tgt_poll_group_000", 00:15:34.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:34.115 "listen_address": { 00:15:34.115 "trtype": "TCP", 00:15:34.115 "adrfam": "IPv4", 00:15:34.115 "traddr": "10.0.0.2", 00:15:34.115 "trsvcid": "4420" 00:15:34.115 }, 00:15:34.115 "peer_address": { 00:15:34.115 "trtype": "TCP", 00:15:34.115 "adrfam": "IPv4", 00:15:34.115 "traddr": "10.0.0.1", 00:15:34.115 "trsvcid": "37530" 00:15:34.115 }, 00:15:34.115 "auth": { 00:15:34.115 "state": "completed", 00:15:34.115 "digest": "sha512", 00:15:34.115 "dhgroup": "null" 00:15:34.115 } 00:15:34.115 } 00:15:34.115 ]' 00:15:34.115 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.115 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:34.115 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.115 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:34.115 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.115 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.115 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.115 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.373 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret 
DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:15:34.374 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:15:35.306 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.306 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:35.306 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.306 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.306 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.306 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.306 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:35.306 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.563 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.823 00:15:35.823 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.823 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.823 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.389 { 00:15:36.389 "cntlid": 101, 00:15:36.389 "qid": 0, 00:15:36.389 "state": "enabled", 00:15:36.389 "thread": "nvmf_tgt_poll_group_000", 00:15:36.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:36.389 "listen_address": { 00:15:36.389 "trtype": "TCP", 00:15:36.389 "adrfam": "IPv4", 00:15:36.389 "traddr": "10.0.0.2", 00:15:36.389 "trsvcid": "4420" 00:15:36.389 }, 00:15:36.389 "peer_address": { 00:15:36.389 "trtype": "TCP", 00:15:36.389 "adrfam": "IPv4", 00:15:36.389 "traddr": "10.0.0.1", 00:15:36.389 "trsvcid": "37542" 00:15:36.389 }, 00:15:36.389 "auth": { 00:15:36.389 "state": "completed", 00:15:36.389 "digest": "sha512", 00:15:36.389 "dhgroup": "null" 00:15:36.389 } 00:15:36.389 } 
00:15:36.389 ]' 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.389 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.647 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:15:36.647 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:15:37.581 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.581 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.581 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:37.581 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.581 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.581 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.581 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.581 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:37.581 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.841 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.099 00:15:38.099 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.100 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.100 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.357 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.357 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:38.357 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.357 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.357 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.357 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.357 { 00:15:38.357 "cntlid": 103, 00:15:38.357 "qid": 0, 00:15:38.357 "state": "enabled", 00:15:38.357 "thread": "nvmf_tgt_poll_group_000", 00:15:38.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:38.357 "listen_address": { 00:15:38.357 "trtype": "TCP", 00:15:38.357 "adrfam": "IPv4", 00:15:38.357 "traddr": "10.0.0.2", 00:15:38.357 "trsvcid": "4420" 00:15:38.357 }, 00:15:38.357 "peer_address": { 00:15:38.357 "trtype": "TCP", 00:15:38.357 "adrfam": "IPv4", 00:15:38.357 "traddr": "10.0.0.1", 00:15:38.357 "trsvcid": "37572" 00:15:38.357 }, 00:15:38.357 "auth": { 00:15:38.357 "state": "completed", 00:15:38.357 "digest": "sha512", 00:15:38.357 "dhgroup": "null" 00:15:38.357 } 00:15:38.357 } 00:15:38.357 ]' 00:15:38.357 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.357 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:38.357 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.357 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:38.357 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.615 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.615 11:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.615 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.874 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:38.874 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:39.808 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.808 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:39.808 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.808 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.808 11:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.808 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.374 00:15:40.374 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.374 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.374 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.632 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.632 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.632 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.632 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.632 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.632 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.632 { 00:15:40.632 "cntlid": 105, 00:15:40.632 "qid": 0, 00:15:40.632 "state": "enabled", 00:15:40.633 "thread": "nvmf_tgt_poll_group_000", 00:15:40.633 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:40.633 "listen_address": { 00:15:40.633 "trtype": "TCP", 00:15:40.633 "adrfam": "IPv4", 00:15:40.633 "traddr": "10.0.0.2", 00:15:40.633 "trsvcid": "4420" 00:15:40.633 }, 00:15:40.633 "peer_address": { 00:15:40.633 "trtype": "TCP", 00:15:40.633 "adrfam": "IPv4", 00:15:40.633 "traddr": "10.0.0.1", 00:15:40.633 "trsvcid": "37610" 00:15:40.633 }, 00:15:40.633 "auth": { 00:15:40.633 "state": "completed", 00:15:40.633 "digest": "sha512", 00:15:40.633 "dhgroup": "ffdhe2048" 00:15:40.633 } 00:15:40.633 } 00:15:40.633 ]' 00:15:40.633 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.633 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:40.633 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.633 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:40.633 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.633 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.633 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.633 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.891 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret 
DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:40.891 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:41.824 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.824 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:41.824 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.824 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.824 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.824 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.824 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:41.824 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:42.082 11:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:42.082 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.082 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:42.082 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:42.082 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.082 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.082 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.082 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.082 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.082 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.082 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.082 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.082 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.340 00:15:42.340 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.340 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.340 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.598 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.598 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.598 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.598 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.598 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.598 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.598 { 00:15:42.598 "cntlid": 107, 00:15:42.598 "qid": 0, 00:15:42.598 "state": "enabled", 00:15:42.598 "thread": "nvmf_tgt_poll_group_000", 00:15:42.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:42.598 "listen_address": { 00:15:42.598 "trtype": "TCP", 00:15:42.598 "adrfam": "IPv4", 00:15:42.598 "traddr": "10.0.0.2", 00:15:42.598 "trsvcid": "4420" 00:15:42.598 }, 00:15:42.598 "peer_address": { 00:15:42.598 "trtype": "TCP", 00:15:42.598 "adrfam": "IPv4", 00:15:42.598 "traddr": "10.0.0.1", 00:15:42.598 "trsvcid": "56350" 00:15:42.598 }, 00:15:42.598 "auth": { 00:15:42.598 "state": 
"completed", 00:15:42.598 "digest": "sha512", 00:15:42.598 "dhgroup": "ffdhe2048" 00:15:42.598 } 00:15:42.598 } 00:15:42.598 ]' 00:15:42.598 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.856 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:42.856 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.856 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.856 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.856 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.856 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.856 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.115 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:15:43.115 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:15:44.049 11:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.050 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:44.050 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.050 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.050 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.050 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.050 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:44.050 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.308 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.874 00:15:44.874 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.874 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.874 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.132 
11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.132 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.132 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.132 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.132 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.132 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.132 { 00:15:45.132 "cntlid": 109, 00:15:45.132 "qid": 0, 00:15:45.132 "state": "enabled", 00:15:45.132 "thread": "nvmf_tgt_poll_group_000", 00:15:45.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:45.132 "listen_address": { 00:15:45.132 "trtype": "TCP", 00:15:45.132 "adrfam": "IPv4", 00:15:45.132 "traddr": "10.0.0.2", 00:15:45.132 "trsvcid": "4420" 00:15:45.132 }, 00:15:45.132 "peer_address": { 00:15:45.132 "trtype": "TCP", 00:15:45.132 "adrfam": "IPv4", 00:15:45.132 "traddr": "10.0.0.1", 00:15:45.132 "trsvcid": "56380" 00:15:45.132 }, 00:15:45.132 "auth": { 00:15:45.132 "state": "completed", 00:15:45.132 "digest": "sha512", 00:15:45.132 "dhgroup": "ffdhe2048" 00:15:45.132 } 00:15:45.132 } 00:15:45.132 ]' 00:15:45.132 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.132 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.132 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.132 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.132 11:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.132 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.132 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.132 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.397 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:15:45.397 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:15:46.421 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.421 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:46.421 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.421 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.421 
11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.421 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.421 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:46.421 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:46.679 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:46.679 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.679 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:46.679 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:46.679 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:46.679 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.679 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:15:46.679 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.679 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.679 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.679 11:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:46.679 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.679 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.937 00:15:46.937 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.937 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.937 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.195 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.195 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.195 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.195 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.195 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.195 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.195 { 00:15:47.195 "cntlid": 111, 
00:15:47.195 "qid": 0, 00:15:47.195 "state": "enabled", 00:15:47.195 "thread": "nvmf_tgt_poll_group_000", 00:15:47.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:47.195 "listen_address": { 00:15:47.195 "trtype": "TCP", 00:15:47.195 "adrfam": "IPv4", 00:15:47.195 "traddr": "10.0.0.2", 00:15:47.195 "trsvcid": "4420" 00:15:47.195 }, 00:15:47.195 "peer_address": { 00:15:47.195 "trtype": "TCP", 00:15:47.195 "adrfam": "IPv4", 00:15:47.195 "traddr": "10.0.0.1", 00:15:47.195 "trsvcid": "56400" 00:15:47.195 }, 00:15:47.195 "auth": { 00:15:47.195 "state": "completed", 00:15:47.195 "digest": "sha512", 00:15:47.195 "dhgroup": "ffdhe2048" 00:15:47.195 } 00:15:47.195 } 00:15:47.195 ]' 00:15:47.195 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.453 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.453 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.453 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.453 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.453 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.453 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.453 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.710 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:47.710 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:48.644 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.644 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:48.644 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.644 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.644 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.644 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.644 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.644 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:48.644 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:48.903 11:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:48.903 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.903 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:48.903 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:48.903 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.903 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.903 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.903 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.903 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.903 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.903 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.903 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.903 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.470 00:15:49.470 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.470 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.470 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.729 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.729 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.729 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.729 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.729 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.729 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.729 { 00:15:49.729 "cntlid": 113, 00:15:49.729 "qid": 0, 00:15:49.729 "state": "enabled", 00:15:49.729 "thread": "nvmf_tgt_poll_group_000", 00:15:49.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:49.729 "listen_address": { 00:15:49.729 "trtype": "TCP", 00:15:49.729 "adrfam": "IPv4", 00:15:49.729 "traddr": "10.0.0.2", 00:15:49.729 "trsvcid": "4420" 00:15:49.729 }, 00:15:49.729 "peer_address": { 00:15:49.729 "trtype": "TCP", 00:15:49.729 "adrfam": "IPv4", 00:15:49.729 "traddr": "10.0.0.1", 00:15:49.729 "trsvcid": "56432" 00:15:49.729 }, 00:15:49.729 "auth": { 00:15:49.729 "state": 
"completed", 00:15:49.729 "digest": "sha512", 00:15:49.729 "dhgroup": "ffdhe3072" 00:15:49.729 } 00:15:49.729 } 00:15:49.729 ]' 00:15:49.729 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.729 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.729 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.729 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.729 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.729 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.729 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.729 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.987 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:49.987 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret 
DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:50.920 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.920 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:50.920 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.920 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.920 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.920 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.920 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:50.920 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.178 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.744 00:15:51.744 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.744 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.744 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.002 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.003 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.003 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.003 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.003 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.003 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.003 { 00:15:52.003 "cntlid": 115, 00:15:52.003 "qid": 0, 00:15:52.003 "state": "enabled", 00:15:52.003 "thread": "nvmf_tgt_poll_group_000", 00:15:52.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:52.003 "listen_address": { 00:15:52.003 "trtype": "TCP", 00:15:52.003 "adrfam": "IPv4", 00:15:52.003 "traddr": "10.0.0.2", 00:15:52.003 "trsvcid": "4420" 00:15:52.003 }, 00:15:52.003 "peer_address": { 00:15:52.003 "trtype": "TCP", 00:15:52.003 "adrfam": "IPv4", 00:15:52.003 "traddr": "10.0.0.1", 00:15:52.003 "trsvcid": "37904" 00:15:52.003 }, 00:15:52.003 "auth": { 00:15:52.003 "state": "completed", 00:15:52.003 "digest": "sha512", 00:15:52.003 "dhgroup": "ffdhe3072" 00:15:52.003 } 00:15:52.003 } 00:15:52.003 ]' 00:15:52.003 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.003 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.003 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.003 11:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.003 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.003 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.003 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.003 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.261 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:15:52.261 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:15:53.195 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.196 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:53.196 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:53.196 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.196 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.196 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.196 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:53.196 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.454 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.019 00:15:54.019 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.019 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.019 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.277 11:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.277 { 00:15:54.277 "cntlid": 117, 00:15:54.277 "qid": 0, 00:15:54.277 "state": "enabled", 00:15:54.277 "thread": "nvmf_tgt_poll_group_000", 00:15:54.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:54.277 "listen_address": { 00:15:54.277 "trtype": "TCP", 00:15:54.277 "adrfam": "IPv4", 00:15:54.277 "traddr": "10.0.0.2", 00:15:54.277 "trsvcid": "4420" 00:15:54.277 }, 00:15:54.277 "peer_address": { 00:15:54.277 "trtype": "TCP", 00:15:54.277 "adrfam": "IPv4", 00:15:54.277 "traddr": "10.0.0.1", 00:15:54.277 "trsvcid": "37928" 00:15:54.277 }, 00:15:54.277 "auth": { 00:15:54.277 "state": "completed", 00:15:54.277 "digest": "sha512", 00:15:54.277 "dhgroup": "ffdhe3072" 00:15:54.277 } 00:15:54.277 } 00:15:54.277 ]' 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.277 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.536 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:15:54.536 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:15:55.469 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.469 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:55.469 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.469 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.469 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.469 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.469 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:55.469 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.727 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.293 00:15:56.293 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.293 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.293 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.551 { 00:15:56.551 "cntlid": 119, 00:15:56.551 "qid": 0, 00:15:56.551 "state": "enabled", 00:15:56.551 "thread": "nvmf_tgt_poll_group_000", 00:15:56.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:56.551 "listen_address": { 00:15:56.551 "trtype": "TCP", 00:15:56.551 "adrfam": "IPv4", 00:15:56.551 "traddr": "10.0.0.2", 00:15:56.551 "trsvcid": "4420" 00:15:56.551 }, 00:15:56.551 "peer_address": { 00:15:56.551 "trtype": "TCP", 00:15:56.551 "adrfam": "IPv4", 00:15:56.551 "traddr": "10.0.0.1", 
00:15:56.551 "trsvcid": "37960" 00:15:56.551 }, 00:15:56.551 "auth": { 00:15:56.551 "state": "completed", 00:15:56.551 "digest": "sha512", 00:15:56.551 "dhgroup": "ffdhe3072" 00:15:56.551 } 00:15:56.551 } 00:15:56.551 ]' 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.551 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.809 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:56.809 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:15:57.742 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.742 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:57.742 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.742 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.742 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.742 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.742 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.742 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:57.742 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:58.000 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:58.000 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.000 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:58.000 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:58.000 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.000 11:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.000 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.000 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.000 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.000 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.000 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.000 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.000 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.257 00:15:58.257 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.257 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.257 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.822 { 00:15:58.822 "cntlid": 121, 00:15:58.822 "qid": 0, 00:15:58.822 "state": "enabled", 00:15:58.822 "thread": "nvmf_tgt_poll_group_000", 00:15:58.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:15:58.822 "listen_address": { 00:15:58.822 "trtype": "TCP", 00:15:58.822 "adrfam": "IPv4", 00:15:58.822 "traddr": "10.0.0.2", 00:15:58.822 "trsvcid": "4420" 00:15:58.822 }, 00:15:58.822 "peer_address": { 00:15:58.822 "trtype": "TCP", 00:15:58.822 "adrfam": "IPv4", 00:15:58.822 "traddr": "10.0.0.1", 00:15:58.822 "trsvcid": "37984" 00:15:58.822 }, 00:15:58.822 "auth": { 00:15:58.822 "state": "completed", 00:15:58.822 "digest": "sha512", 00:15:58.822 "dhgroup": "ffdhe4096" 00:15:58.822 } 00:15:58.822 } 00:15:58.822 ]' 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.822 11:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.822 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.080 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:15:59.080 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:16:00.013 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.013 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:00.013 11:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.013 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.013 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.013 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.013 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:00.013 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:00.271 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:00.271 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.271 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:00.271 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:00.271 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.271 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.271 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.271 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.271 11:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.271 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.271 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.271 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.271 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.529 00:16:00.529 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.530 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.530 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.788 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.788 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.788 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.788 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.788 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.788 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.788 { 00:16:00.788 "cntlid": 123, 00:16:00.788 "qid": 0, 00:16:00.788 "state": "enabled", 00:16:00.788 "thread": "nvmf_tgt_poll_group_000", 00:16:00.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:00.788 "listen_address": { 00:16:00.788 "trtype": "TCP", 00:16:00.788 "adrfam": "IPv4", 00:16:00.788 "traddr": "10.0.0.2", 00:16:00.788 "trsvcid": "4420" 00:16:00.788 }, 00:16:00.788 "peer_address": { 00:16:00.788 "trtype": "TCP", 00:16:00.788 "adrfam": "IPv4", 00:16:00.788 "traddr": "10.0.0.1", 00:16:00.788 "trsvcid": "38016" 00:16:00.788 }, 00:16:00.788 "auth": { 00:16:00.788 "state": "completed", 00:16:00.788 "digest": "sha512", 00:16:00.788 "dhgroup": "ffdhe4096" 00:16:00.788 } 00:16:00.788 } 00:16:00.788 ]' 00:16:00.788 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.046 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.046 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.046 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:01.046 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.046 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.046 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.046 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.303 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:16:01.303 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:16:02.238 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.238 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:02.238 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.238 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.238 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.238 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.238 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:02.238 11:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:02.496 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:02.755
00:16:02.755 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:02.755 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:02.755 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:03.013 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:03.013 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:03.013 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.013 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.013 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.013 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:03.013 {
00:16:03.013 "cntlid": 125,
00:16:03.013 "qid": 0,
00:16:03.013 "state": "enabled",
00:16:03.013 "thread": "nvmf_tgt_poll_group_000",
00:16:03.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd",
00:16:03.013 "listen_address": {
00:16:03.013 "trtype": "TCP",
00:16:03.013 "adrfam": "IPv4",
00:16:03.013 "traddr": "10.0.0.2",
00:16:03.013 "trsvcid": "4420"
00:16:03.013 },
00:16:03.013 "peer_address": {
00:16:03.013 "trtype": "TCP",
00:16:03.013 "adrfam": "IPv4",
00:16:03.013 "traddr": "10.0.0.1",
00:16:03.013 "trsvcid": "60668"
00:16:03.013 },
00:16:03.013 "auth": {
00:16:03.013 "state": "completed",
00:16:03.013 "digest": "sha512",
00:16:03.013 "dhgroup": "ffdhe4096"
00:16:03.013 }
00:16:03.013 }
00:16:03.013 ]'
00:16:03.013 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:03.013 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:03.271 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:03.271 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:03.271 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:03.271 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:03.271 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:03.271 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:03.528 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+:
00:16:03.528 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+:
00:16:04.462 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:04.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:04.462 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
00:16:04.462 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.462 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.462 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.462 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:04.462 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:04.462 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:04.721 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:16:04.721 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:04.721 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:04.721 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:04.721 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:04.721 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3
00:16:04.721 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.721 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.721 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.721 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:04.721 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:04.721 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:05.015
00:16:05.015 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:05.015 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:05.015 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:05.279 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:05.279 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:05.279 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.279 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.279 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.279 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:05.279 {
00:16:05.279 "cntlid": 127,
00:16:05.279 "qid": 0,
00:16:05.279 "state": "enabled",
00:16:05.279 "thread": "nvmf_tgt_poll_group_000",
00:16:05.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd",
00:16:05.279 "listen_address": {
00:16:05.279 "trtype": "TCP",
00:16:05.279 "adrfam": "IPv4",
00:16:05.279 "traddr": "10.0.0.2",
00:16:05.279 "trsvcid": "4420"
00:16:05.279 },
00:16:05.279 "peer_address": {
00:16:05.279 "trtype": "TCP",
00:16:05.279 "adrfam": "IPv4",
00:16:05.279 "traddr": "10.0.0.1",
00:16:05.279 "trsvcid": "60694"
00:16:05.279 },
00:16:05.279 "auth": {
00:16:05.279 "state": "completed",
00:16:05.279 "digest": "sha512",
00:16:05.279 "dhgroup": "ffdhe4096"
00:16:05.279 }
00:16:05.279 }
00:16:05.279 ]'
00:16:05.279 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:05.279 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:05.279 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:05.537 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:05.537 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:05.537 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:05.537 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:05.537 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:05.795 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=:
00:16:05.795 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=:
00:16:06.728 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:06.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:06.728 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
00:16:06.728 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.728 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.728 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.728 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:06.728 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:06.728 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:06.728 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:06.986 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:07.552
00:16:07.552 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:07.552 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:07.552 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:07.810 {
00:16:07.810 "cntlid": 129,
00:16:07.810 "qid": 0,
00:16:07.810 "state": "enabled",
00:16:07.810 "thread": "nvmf_tgt_poll_group_000",
00:16:07.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd",
00:16:07.810 "listen_address": {
00:16:07.810 "trtype": "TCP",
00:16:07.810 "adrfam": "IPv4",
00:16:07.810 "traddr": "10.0.0.2",
00:16:07.810 "trsvcid": "4420"
00:16:07.810 },
00:16:07.810 "peer_address": {
00:16:07.810 "trtype": "TCP",
00:16:07.810 "adrfam": "IPv4",
00:16:07.810 "traddr": "10.0.0.1",
00:16:07.810 "trsvcid": "60710"
00:16:07.810 },
00:16:07.810 "auth": {
00:16:07.810 "state": "completed",
00:16:07.810 "digest": "sha512",
00:16:07.810 "dhgroup": "ffdhe6144"
00:16:07.810 }
00:16:07.810 }
00:16:07.810 ]'
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:07.810 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:08.068 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=:
00:16:08.068 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=:
00:16:09.002 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:09.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:09.002 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
00:16:09.002 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.002 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.002 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.002 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:09.002 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:09.002 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:09.260 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:09.826
00:16:09.826 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:09.826 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:09.826 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:10.085 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:10.085 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:10.085 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.085 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.085 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.085 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:10.085 {
00:16:10.085 "cntlid": 131,
00:16:10.085 "qid": 0,
00:16:10.085 "state": "enabled",
00:16:10.085 "thread": "nvmf_tgt_poll_group_000",
00:16:10.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd",
00:16:10.085 "listen_address": {
00:16:10.085 "trtype": "TCP",
00:16:10.085 "adrfam": "IPv4",
00:16:10.085 "traddr": "10.0.0.2",
00:16:10.085 "trsvcid": "4420"
00:16:10.085 },
00:16:10.085 "peer_address": {
00:16:10.085 "trtype": "TCP",
00:16:10.085 "adrfam": "IPv4",
00:16:10.085 "traddr": "10.0.0.1",
00:16:10.085 "trsvcid": "60736"
00:16:10.085 },
00:16:10.085 "auth": {
00:16:10.085 "state": "completed",
00:16:10.085 "digest": "sha512",
00:16:10.085 "dhgroup": "ffdhe6144"
00:16:10.085 }
00:16:10.085 }
00:16:10.085 ]'
00:16:10.085 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:10.343 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:10.343 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:10.343 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:10.343 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:10.343 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:10.343 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:10.343 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:10.601 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==:
00:16:10.601 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==:
00:16:11.535 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:11.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:11.535 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
00:16:11.535 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.535 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.535 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.535 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:11.535 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:11.535 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:11.801 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:16:11.801 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:11.802 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:11.802 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:11.802 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:11.802 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:11.802 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:11.802 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.802 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.802 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.802 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:11.802 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:11.802 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:12.373
00:16:12.373 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:12.373 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:12.373 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:12.631 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:12.631 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:12.631 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.631 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.631 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.631 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:12.631 {
00:16:12.631 "cntlid": 133,
00:16:12.631 "qid": 0,
00:16:12.631 "state": "enabled",
00:16:12.631 "thread": "nvmf_tgt_poll_group_000",
00:16:12.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd",
00:16:12.631 "listen_address": {
00:16:12.631 "trtype": "TCP",
00:16:12.631 "adrfam": "IPv4",
00:16:12.631 "traddr": "10.0.0.2",
00:16:12.631 "trsvcid": "4420"
00:16:12.631 },
00:16:12.631 "peer_address": {
00:16:12.631 "trtype": "TCP",
00:16:12.631 "adrfam": "IPv4",
00:16:12.631 "traddr": "10.0.0.1",
00:16:12.631 "trsvcid": "60974"
00:16:12.631 },
00:16:12.631 "auth": {
00:16:12.631 "state": "completed",
00:16:12.631 "digest": "sha512",
00:16:12.631 "dhgroup": "ffdhe6144"
00:16:12.631 }
00:16:12.631 }
00:16:12.631 ]'
00:16:12.631 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:12.631 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:12.631 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:12.889 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:12.889 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:12.889 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:12.889 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:12.889 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:13.147 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+:
00:16:13.147 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+:
00:16:14.081 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:14.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:14.081 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
00:16:14.081 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.081 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.081 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.081 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:14.081 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:14.081 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:14.339 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:14.905
00:16:14.905 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:14.905 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:14.905 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:15.164 {
00:16:15.164 "cntlid": 135,
00:16:15.164 "qid": 0,
00:16:15.164 "state": "enabled",
00:16:15.164 "thread": "nvmf_tgt_poll_group_000",
00:16:15.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd",
00:16:15.164 "listen_address": {
00:16:15.164 "trtype": "TCP",
00:16:15.164 "adrfam": "IPv4",
00:16:15.164 "traddr": "10.0.0.2",
00:16:15.164 "trsvcid": "4420"
00:16:15.164 },
00:16:15.164 "peer_address": {
00:16:15.164 "trtype": "TCP",
00:16:15.164 "adrfam": "IPv4",
00:16:15.164 "traddr": "10.0.0.1",
00:16:15.164 "trsvcid": "32778"
00:16:15.164 },
00:16:15.164 "auth": {
00:16:15.164 "state": "completed",
00:16:15.164 "digest": "sha512",
00:16:15.164 "dhgroup": "ffdhe6144"
00:16:15.164 }
00:16:15.164 }
00:16:15.164 ]'
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:15.164 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.423 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:16:15.423 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:16:16.357 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.357 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:16.357 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.357 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.357 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.357 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.357 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.357 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:16.357 11:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.615 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.549 00:16:17.549 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.549 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.549 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.807 { 00:16:17.807 "cntlid": 137, 00:16:17.807 "qid": 0, 00:16:17.807 "state": "enabled", 00:16:17.807 "thread": "nvmf_tgt_poll_group_000", 00:16:17.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:17.807 "listen_address": { 00:16:17.807 "trtype": "TCP", 00:16:17.807 "adrfam": "IPv4", 00:16:17.807 "traddr": "10.0.0.2", 00:16:17.807 
"trsvcid": "4420" 00:16:17.807 }, 00:16:17.807 "peer_address": { 00:16:17.807 "trtype": "TCP", 00:16:17.807 "adrfam": "IPv4", 00:16:17.807 "traddr": "10.0.0.1", 00:16:17.807 "trsvcid": "32804" 00:16:17.807 }, 00:16:17.807 "auth": { 00:16:17.807 "state": "completed", 00:16:17.807 "digest": "sha512", 00:16:17.807 "dhgroup": "ffdhe8192" 00:16:17.807 } 00:16:17.807 } 00:16:17.807 ]' 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.807 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.374 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:16:18.374 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:16:19.308 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.309 11:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.309 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.243 00:16:20.244 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.244 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.244 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.502 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.502 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.502 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.502 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.502 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.502 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.502 { 00:16:20.502 "cntlid": 139, 00:16:20.502 "qid": 0, 00:16:20.502 "state": "enabled", 00:16:20.502 "thread": "nvmf_tgt_poll_group_000", 00:16:20.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:20.502 "listen_address": { 00:16:20.502 "trtype": "TCP", 00:16:20.502 "adrfam": "IPv4", 00:16:20.502 "traddr": "10.0.0.2", 00:16:20.502 "trsvcid": "4420" 00:16:20.502 }, 00:16:20.502 "peer_address": { 00:16:20.502 "trtype": "TCP", 00:16:20.502 "adrfam": "IPv4", 00:16:20.502 "traddr": "10.0.0.1", 00:16:20.502 "trsvcid": "32826" 00:16:20.502 }, 00:16:20.502 "auth": { 00:16:20.502 "state": "completed", 00:16:20.502 "digest": "sha512", 00:16:20.502 "dhgroup": "ffdhe8192" 00:16:20.502 } 00:16:20.502 } 00:16:20.502 ]' 00:16:20.502 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.502 11:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.502 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.502 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.502 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.760 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.760 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.760 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.018 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:16:21.018 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: --dhchap-ctrl-secret DHHC-1:02:Mzk3M2FmOTc0NzQ5Zjg1MDNjNmFhMjYzNDdkYzUwMDcwNmExN2M1ZjE2ZmUxZjc0gyublA==: 00:16:21.951 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.951 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:21.951 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.951 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.951 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.951 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.951 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:21.951 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.209 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.142 00:16:23.142 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.142 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.142 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.400 11:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.400 { 00:16:23.400 "cntlid": 141, 00:16:23.400 "qid": 0, 00:16:23.400 "state": "enabled", 00:16:23.400 "thread": "nvmf_tgt_poll_group_000", 00:16:23.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:23.400 "listen_address": { 00:16:23.400 "trtype": "TCP", 00:16:23.400 "adrfam": "IPv4", 00:16:23.400 "traddr": "10.0.0.2", 00:16:23.400 "trsvcid": "4420" 00:16:23.400 }, 00:16:23.400 "peer_address": { 00:16:23.400 "trtype": "TCP", 00:16:23.400 "adrfam": "IPv4", 00:16:23.400 "traddr": "10.0.0.1", 00:16:23.400 "trsvcid": "45586" 00:16:23.400 }, 00:16:23.400 "auth": { 00:16:23.400 "state": "completed", 00:16:23.400 "digest": "sha512", 00:16:23.400 "dhgroup": "ffdhe8192" 00:16:23.400 } 00:16:23.400 } 00:16:23.400 ]' 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.400 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.964 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:16:23.964 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:01:MjNlMDI2ZDFmMzUzODJjZDgyZWQxN2JjMTUwNTk5ZmNJddK+: 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.897 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.905 00:16:25.905 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.905 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.905 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.163 { 00:16:26.163 "cntlid": 143, 00:16:26.163 "qid": 0, 00:16:26.163 "state": "enabled", 00:16:26.163 "thread": "nvmf_tgt_poll_group_000", 00:16:26.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:26.163 "listen_address": { 00:16:26.163 "trtype": "TCP", 00:16:26.163 "adrfam": 
"IPv4", 00:16:26.163 "traddr": "10.0.0.2", 00:16:26.163 "trsvcid": "4420" 00:16:26.163 }, 00:16:26.163 "peer_address": { 00:16:26.163 "trtype": "TCP", 00:16:26.163 "adrfam": "IPv4", 00:16:26.163 "traddr": "10.0.0.1", 00:16:26.163 "trsvcid": "45614" 00:16:26.163 }, 00:16:26.163 "auth": { 00:16:26.163 "state": "completed", 00:16:26.163 "digest": "sha512", 00:16:26.163 "dhgroup": "ffdhe8192" 00:16:26.163 } 00:16:26.163 } 00:16:26.163 ]' 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.163 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.421 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:16:26.422 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 
8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:16:27.355 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.355 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:27.355 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.355 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.355 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.355 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:27.355 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:27.355 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:27.355 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:27.355 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:27.355 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:27.614 11:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:27.614 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.614 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.614 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:27.614 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:27.614 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.614 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.614 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.614 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.614 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.614 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.614 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.614 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.548 00:16:28.549 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.549 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.549 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.807 { 00:16:28.807 "cntlid": 145, 00:16:28.807 "qid": 0, 00:16:28.807 "state": "enabled", 00:16:28.807 "thread": "nvmf_tgt_poll_group_000", 00:16:28.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:28.807 "listen_address": { 00:16:28.807 "trtype": "TCP", 00:16:28.807 "adrfam": "IPv4", 00:16:28.807 "traddr": "10.0.0.2", 00:16:28.807 "trsvcid": "4420" 00:16:28.807 }, 00:16:28.807 "peer_address": { 00:16:28.807 "trtype": "TCP", 00:16:28.807 "adrfam": "IPv4", 00:16:28.807 "traddr": "10.0.0.1", 00:16:28.807 "trsvcid": "45654" 00:16:28.807 }, 00:16:28.807 "auth": { 00:16:28.807 "state": 
"completed", 00:16:28.807 "digest": "sha512", 00:16:28.807 "dhgroup": "ffdhe8192" 00:16:28.807 } 00:16:28.807 } 00:16:28.807 ]' 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.807 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.065 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:16:29.065 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:00:ZGIxOGE2NWEzNjQxYjk3NmQxODIxOTFiNTQzZGMwODM2YzU4MDNlMzE4MjMzMWM4nuP7ZA==: --dhchap-ctrl-secret 
DHHC-1:03:Y2U5YWYwNmI2Yzg5NzMxMjE1MzQzZGFhNjJhZmZhYWRmZDdiMzk0MDBlOGNjYjk1MmIyOTU5OWY4YjQwYTE4OdJAE80=: 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:29.998 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:29.999 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:30.933 request: 00:16:30.933 { 00:16:30.933 "name": "nvme0", 00:16:30.933 "trtype": "tcp", 00:16:30.933 "traddr": "10.0.0.2", 00:16:30.933 "adrfam": "ipv4", 00:16:30.933 "trsvcid": "4420", 00:16:30.933 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:30.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:30.933 "prchk_reftag": false, 00:16:30.933 "prchk_guard": false, 00:16:30.933 "hdgst": false, 00:16:30.933 "ddgst": false, 00:16:30.933 "dhchap_key": "key2", 00:16:30.933 "allow_unrecognized_csi": false, 00:16:30.933 "method": "bdev_nvme_attach_controller", 00:16:30.933 "req_id": 1 00:16:30.933 } 00:16:30.933 Got JSON-RPC error response 00:16:30.933 response: 00:16:30.933 { 00:16:30.933 "code": -5, 00:16:30.933 "message": 
"Input/output error" 00:16:30.933 } 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:30.933 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:30.933 11:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:30.934 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:30.934 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.934 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:30.934 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.934 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:30.934 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:30.934 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:31.868 request: 00:16:31.868 { 00:16:31.868 "name": "nvme0", 00:16:31.868 "trtype": "tcp", 00:16:31.868 "traddr": "10.0.0.2", 00:16:31.868 "adrfam": "ipv4", 00:16:31.868 "trsvcid": "4420", 00:16:31.868 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:31.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:31.868 "prchk_reftag": false, 00:16:31.868 "prchk_guard": false, 00:16:31.868 "hdgst": 
false, 00:16:31.868 "ddgst": false, 00:16:31.868 "dhchap_key": "key1", 00:16:31.868 "dhchap_ctrlr_key": "ckey2", 00:16:31.868 "allow_unrecognized_csi": false, 00:16:31.868 "method": "bdev_nvme_attach_controller", 00:16:31.868 "req_id": 1 00:16:31.868 } 00:16:31.868 Got JSON-RPC error response 00:16:31.868 response: 00:16:31.868 { 00:16:31.868 "code": -5, 00:16:31.868 "message": "Input/output error" 00:16:31.868 } 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.868 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.433 request: 00:16:32.433 { 00:16:32.433 "name": "nvme0", 00:16:32.433 "trtype": 
"tcp", 00:16:32.433 "traddr": "10.0.0.2", 00:16:32.433 "adrfam": "ipv4", 00:16:32.433 "trsvcid": "4420", 00:16:32.433 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:32.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:32.433 "prchk_reftag": false, 00:16:32.433 "prchk_guard": false, 00:16:32.433 "hdgst": false, 00:16:32.433 "ddgst": false, 00:16:32.433 "dhchap_key": "key1", 00:16:32.433 "dhchap_ctrlr_key": "ckey1", 00:16:32.433 "allow_unrecognized_csi": false, 00:16:32.433 "method": "bdev_nvme_attach_controller", 00:16:32.433 "req_id": 1 00:16:32.433 } 00:16:32.433 Got JSON-RPC error response 00:16:32.433 response: 00:16:32.433 { 00:16:32.433 "code": -5, 00:16:32.433 "message": "Input/output error" 00:16:32.433 } 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2590990 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2590990 ']' 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2590990 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.433 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2590990 00:16:32.691 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.691 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.691 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2590990' 00:16:32.691 killing process with pid 2590990 00:16:32.691 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2590990 00:16:32.691 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2590990 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2613749 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2613749 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2613749 ']' 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.691 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2613749 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2613749 ']' 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.950 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.516 null0 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ew0 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.HLS ]] 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HLS 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.L8Q 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.nHL ]] 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nHL 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.TLq 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.iiS ]] 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iiS 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.WhC 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.516 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.517 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.517 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.517 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.517 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.890 nvme0n1 00:16:34.890 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.890 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.890 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.457 { 00:16:35.457 "cntlid": 1, 00:16:35.457 "qid": 0, 00:16:35.457 "state": "enabled", 00:16:35.457 "thread": "nvmf_tgt_poll_group_000", 00:16:35.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:35.457 "listen_address": { 00:16:35.457 "trtype": "TCP", 00:16:35.457 "adrfam": "IPv4", 00:16:35.457 "traddr": "10.0.0.2", 00:16:35.457 "trsvcid": "4420" 00:16:35.457 }, 00:16:35.457 "peer_address": { 00:16:35.457 "trtype": "TCP", 00:16:35.457 "adrfam": "IPv4", 00:16:35.457 "traddr": 
"10.0.0.1", 00:16:35.457 "trsvcid": "48892" 00:16:35.457 }, 00:16:35.457 "auth": { 00:16:35.457 "state": "completed", 00:16:35.457 "digest": "sha512", 00:16:35.457 "dhgroup": "ffdhe8192" 00:16:35.457 } 00:16:35.457 } 00:16:35.457 ]' 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.457 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.716 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:16:35.716 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:16:36.650 11:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.650 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:36.650 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.650 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.650 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.650 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key3 00:16:36.650 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.650 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.650 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.650 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:36.650 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:36.908 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:36.908 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:36.908 11:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:36.908 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:36.908 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.908 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:36.908 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.908 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.908 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.908 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.166 request: 00:16:37.166 { 00:16:37.166 "name": "nvme0", 00:16:37.166 "trtype": "tcp", 00:16:37.166 "traddr": "10.0.0.2", 00:16:37.166 "adrfam": "ipv4", 00:16:37.166 "trsvcid": "4420", 00:16:37.166 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:37.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:37.166 "prchk_reftag": false, 00:16:37.166 "prchk_guard": false, 00:16:37.166 "hdgst": false, 00:16:37.166 "ddgst": false, 00:16:37.166 "dhchap_key": "key3", 00:16:37.166 
"allow_unrecognized_csi": false, 00:16:37.166 "method": "bdev_nvme_attach_controller", 00:16:37.166 "req_id": 1 00:16:37.166 } 00:16:37.166 Got JSON-RPC error response 00:16:37.166 response: 00:16:37.166 { 00:16:37.166 "code": -5, 00:16:37.166 "message": "Input/output error" 00:16:37.166 } 00:16:37.166 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:37.166 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:37.166 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:37.166 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:37.166 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:37.166 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:37.166 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:37.166 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:37.424 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:37.424 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:37.424 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:37.424 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:37.424 11:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.424 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:37.424 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.424 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.424 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.424 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.682 request: 00:16:37.682 { 00:16:37.682 "name": "nvme0", 00:16:37.682 "trtype": "tcp", 00:16:37.682 "traddr": "10.0.0.2", 00:16:37.682 "adrfam": "ipv4", 00:16:37.682 "trsvcid": "4420", 00:16:37.682 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:37.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:37.682 "prchk_reftag": false, 00:16:37.682 "prchk_guard": false, 00:16:37.682 "hdgst": false, 00:16:37.682 "ddgst": false, 00:16:37.682 "dhchap_key": "key3", 00:16:37.682 "allow_unrecognized_csi": false, 00:16:37.682 "method": "bdev_nvme_attach_controller", 00:16:37.682 "req_id": 1 00:16:37.682 } 00:16:37.682 Got JSON-RPC error response 00:16:37.682 response: 00:16:37.682 { 00:16:37.682 "code": -5, 00:16:37.682 "message": "Input/output error" 00:16:37.682 } 00:16:37.940 
11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:37.940 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:37.940 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:37.940 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:37.940 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:37.940 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:37.940 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:37.940 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:37.940 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:37.940 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.199 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.765 request: 00:16:38.765 { 00:16:38.765 "name": "nvme0", 00:16:38.765 "trtype": "tcp", 00:16:38.765 "traddr": "10.0.0.2", 00:16:38.765 "adrfam": "ipv4", 00:16:38.765 "trsvcid": "4420", 00:16:38.765 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:38.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:38.765 "prchk_reftag": false, 00:16:38.765 "prchk_guard": false, 00:16:38.765 "hdgst": false, 00:16:38.765 "ddgst": false, 00:16:38.765 "dhchap_key": "key0", 00:16:38.765 "dhchap_ctrlr_key": "key1", 00:16:38.765 "allow_unrecognized_csi": false, 00:16:38.765 "method": "bdev_nvme_attach_controller", 00:16:38.765 "req_id": 1 00:16:38.765 } 00:16:38.765 Got JSON-RPC error response 00:16:38.765 response: 00:16:38.765 { 00:16:38.765 "code": -5, 00:16:38.765 "message": "Input/output error" 00:16:38.765 } 00:16:38.765 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:38.765 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:38.765 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:38.765 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:38.765 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:38.765 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:38.765 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:39.023 nvme0n1 00:16:39.023 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:39.023 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:39.023 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.281 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.281 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.281 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.539 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 00:16:39.539 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.539 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:39.539 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.539 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:39.539 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:39.539 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:40.912 nvme0n1 00:16:40.912 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:40.912 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.912 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:41.171 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.171 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:41.171 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.171 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.171 
11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.171 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:41.171 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:41.171 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.429 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.429 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:16:41.429 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid 8b464f06-2980-e311-ba20-001e67a94acd -l 0 --dhchap-secret DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: --dhchap-ctrl-secret DHHC-1:03:YjkzMTE2NDMzYjE2NmEyMTY2NmUxNjI3NjI4NDQxZjBlZGJmODM1ZmMxYTdlOWI5YjJiYTg5YWI2N2M5MjZhNgdVFY0=: 00:16:42.362 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:42.362 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:42.363 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:42.363 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:42.363 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:42.363 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:42.363 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:42.363 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.363 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.621 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:42.621 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:42.621 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:42.621 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:42.621 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.621 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:42.621 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.621 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:42.621 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:42.621 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:43.555 request: 00:16:43.555 { 00:16:43.555 "name": "nvme0", 00:16:43.555 "trtype": "tcp", 00:16:43.555 "traddr": "10.0.0.2", 00:16:43.555 "adrfam": "ipv4", 00:16:43.555 "trsvcid": "4420", 00:16:43.555 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:43.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd", 00:16:43.555 "prchk_reftag": false, 00:16:43.555 "prchk_guard": false, 00:16:43.555 "hdgst": false, 00:16:43.555 "ddgst": false, 00:16:43.555 "dhchap_key": "key1", 00:16:43.555 "allow_unrecognized_csi": false, 00:16:43.555 "method": "bdev_nvme_attach_controller", 00:16:43.555 "req_id": 1 00:16:43.555 } 00:16:43.555 Got JSON-RPC error response 00:16:43.555 response: 00:16:43.555 { 00:16:43.555 "code": -5, 00:16:43.555 "message": "Input/output error" 00:16:43.555 } 00:16:43.555 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:43.555 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:43.555 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:43.555 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:43.555 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:43.555 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:43.555 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:44.929 nvme0n1 00:16:44.929 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:44.929 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:44.929 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.187 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.187 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.187 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.446 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:45.446 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.446 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:45.446 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.446 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:45.446 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:45.446 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:45.704 nvme0n1 00:16:45.704 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:45.704 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:45.704 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.962 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.962 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.962 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: '' 2s 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: ]] 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTE2MzEzMThhZjczYTc3ZGE3ZTgzMzIzY2U2Yzg5YmXOwQ9p: 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:46.249 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:48.194 
11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: 2s 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:48.194 11:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: ]] 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTE0NzliNDY5YjliY2Q3ZWQzODI2ZTczYzczNjQ5NDZlZTQ5NTVjM2Y4NDhjODUw/9XR0g==: 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:48.194 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:50.721 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:51.655 nvme0n1 00:16:51.655 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:16:51.655 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.655 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.655 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.655 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:51.655 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:52.587 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:52.588 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:52.588 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.845 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.845 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:52.845 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.845 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.845 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.845 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:52.845 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:53.103 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:53.103 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:53.103 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:53.360 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:54.293 request: 00:16:54.293 { 00:16:54.293 "name": "nvme0", 00:16:54.293 "dhchap_key": "key1", 00:16:54.293 "dhchap_ctrlr_key": "key3", 00:16:54.293 "method": "bdev_nvme_set_keys", 00:16:54.293 "req_id": 1 00:16:54.293 } 00:16:54.293 Got JSON-RPC error response 00:16:54.293 response: 00:16:54.293 { 00:16:54.293 "code": -13, 00:16:54.293 "message": "Permission denied" 00:16:54.293 } 00:16:54.293 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:54.293 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:54.293 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:54.293 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:54.293 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:54.293 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:54.293 11:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.551 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:54.551 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:55.486 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:55.486 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:55.486 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.744 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:55.744 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:55.744 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.744 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.744 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.744 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:55.744 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:55.744 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:57.118 nvme0n1 00:16:57.118 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:57.118 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.118 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.118 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.118 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:57.118 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:57.118 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:57.118 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:57.118 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.118 11:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:57.118 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.118 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:57.118 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:58.052 request: 00:16:58.052 { 00:16:58.052 "name": "nvme0", 00:16:58.052 "dhchap_key": "key2", 00:16:58.052 "dhchap_ctrlr_key": "key0", 00:16:58.052 "method": "bdev_nvme_set_keys", 00:16:58.052 "req_id": 1 00:16:58.052 } 00:16:58.052 Got JSON-RPC error response 00:16:58.052 response: 00:16:58.052 { 00:16:58.052 "code": -13, 00:16:58.052 "message": "Permission denied" 00:16:58.052 } 00:16:58.052 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:58.052 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.052 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.052 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.052 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:58.052 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.052 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:58.310 11:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:58.310 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:59.243 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:59.243 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:59.243 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2591128 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2591128 ']' 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2591128 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2591128 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2591128' 00:16:59.501 killing process with pid 2591128 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2591128 00:16:59.501 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2591128 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:00.067 rmmod nvme_tcp 00:17:00.067 rmmod nvme_fabrics 00:17:00.067 rmmod nvme_keyring 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2613749 ']' 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2613749 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2613749 ']' 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2613749 
00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2613749 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2613749' 00:17:00.067 killing process with pid 2613749 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2613749 00:17:00.067 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2613749 00:17:00.326 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:00.326 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:00.326 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:00.326 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:00.326 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:00.326 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:00.326 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:00.326 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:00.326 11:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:00.326 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.326 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.326 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ew0 /tmp/spdk.key-sha256.L8Q /tmp/spdk.key-sha384.TLq /tmp/spdk.key-sha512.WhC /tmp/spdk.key-sha512.HLS /tmp/spdk.key-sha384.nHL /tmp/spdk.key-sha256.iiS '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:02.867 00:17:02.867 real 3m31.334s 00:17:02.867 user 8m15.358s 00:17:02.867 sys 0m28.420s 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.867 ************************************ 00:17:02.867 END TEST nvmf_auth_target 00:17:02.867 ************************************ 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:02.867 ************************************ 00:17:02.867 START TEST nvmf_bdevio_no_huge 00:17:02.867 ************************************ 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:02.867 * Looking for test storage... 00:17:02.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.867 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:02.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.868 --rc genhtml_branch_coverage=1 00:17:02.868 --rc genhtml_function_coverage=1 00:17:02.868 --rc genhtml_legend=1 00:17:02.868 --rc geninfo_all_blocks=1 00:17:02.868 --rc geninfo_unexecuted_blocks=1 00:17:02.868 00:17:02.868 ' 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:02.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.868 --rc genhtml_branch_coverage=1 00:17:02.868 --rc genhtml_function_coverage=1 00:17:02.868 --rc genhtml_legend=1 00:17:02.868 --rc geninfo_all_blocks=1 00:17:02.868 --rc geninfo_unexecuted_blocks=1 00:17:02.868 00:17:02.868 ' 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:02.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.868 --rc genhtml_branch_coverage=1 00:17:02.868 --rc genhtml_function_coverage=1 00:17:02.868 --rc genhtml_legend=1 00:17:02.868 --rc geninfo_all_blocks=1 00:17:02.868 --rc geninfo_unexecuted_blocks=1 00:17:02.868 00:17:02.868 ' 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:02.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.868 --rc genhtml_branch_coverage=1 
00:17:02.868 --rc genhtml_function_coverage=1 00:17:02.868 --rc genhtml_legend=1 00:17:02.868 --rc geninfo_all_blocks=1 00:17:02.868 --rc geninfo_unexecuted_blocks=1 00:17:02.868 00:17:02.868 ' 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:17:02.868 11:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.868 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:02.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:02.869 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 
0x159b)' 00:17:05.404 Found 0000:82:00.0 (0x8086 - 0x159b) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:17:05.404 Found 0000:82:00.1 (0x8086 - 0x159b) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:17:05.404 Found net devices under 0000:82:00.0: cvl_0_0 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.404 
11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:17:05.404 Found net devices under 0000:82:00.1: cvl_0_1 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:05.404 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:05.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:17:05.405 00:17:05.405 --- 10.0.0.2 ping statistics --- 00:17:05.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.405 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:17:05.405 00:17:05.405 --- 10.0.0.1 ping statistics --- 00:17:05.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.405 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2619424 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2619424 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2619424 ']' 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.405 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.405 [2024-11-19 11:19:00.826235] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:17:05.405 [2024-11-19 11:19:00.826311] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:05.664 [2024-11-19 11:19:00.913911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.664 [2024-11-19 11:19:00.974509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.664 [2024-11-19 11:19:00.974592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.664 [2024-11-19 11:19:00.974620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.664 [2024-11-19 11:19:00.974636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.664 [2024-11-19 11:19:00.974646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:05.664 [2024-11-19 11:19:00.975776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:05.664 [2024-11-19 11:19:00.975836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:05.664 [2024-11-19 11:19:00.975905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:05.664 [2024-11-19 11:19:00.975908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.664 [2024-11-19 11:19:01.137194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:05.664 11:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.664 Malloc0 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.664 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.922 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.922 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:05.922 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.922 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.922 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.922 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.922 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.922 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:05.923 [2024-11-19 11:19:01.175441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.923 11:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.923 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:05.923 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:05.923 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:05.923 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:05.923 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:05.923 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:05.923 { 00:17:05.923 "params": { 00:17:05.923 "name": "Nvme$subsystem", 00:17:05.923 "trtype": "$TEST_TRANSPORT", 00:17:05.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:05.923 "adrfam": "ipv4", 00:17:05.923 "trsvcid": "$NVMF_PORT", 00:17:05.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:05.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:05.923 "hdgst": ${hdgst:-false}, 00:17:05.923 "ddgst": ${ddgst:-false} 00:17:05.923 }, 00:17:05.923 "method": "bdev_nvme_attach_controller" 00:17:05.923 } 00:17:05.923 EOF 00:17:05.923 )") 00:17:05.923 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:05.923 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:05.923 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:05.923 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:05.923 "params": { 00:17:05.923 "name": "Nvme1", 00:17:05.923 "trtype": "tcp", 00:17:05.923 "traddr": "10.0.0.2", 00:17:05.923 "adrfam": "ipv4", 00:17:05.923 "trsvcid": "4420", 00:17:05.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.923 "hdgst": false, 00:17:05.923 "ddgst": false 00:17:05.923 }, 00:17:05.923 "method": "bdev_nvme_attach_controller" 00:17:05.923 }' 00:17:05.923 [2024-11-19 11:19:01.226453] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:17:05.923 [2024-11-19 11:19:01.226523] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2619460 ] 00:17:05.923 [2024-11-19 11:19:01.307218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:05.923 [2024-11-19 11:19:01.372917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.923 [2024-11-19 11:19:01.372970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.923 [2024-11-19 11:19:01.372975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.181 I/O targets: 00:17:06.181 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:06.181 00:17:06.181 00:17:06.181 CUnit - A unit testing framework for C - Version 2.1-3 00:17:06.181 http://cunit.sourceforge.net/ 00:17:06.181 00:17:06.181 00:17:06.181 Suite: bdevio tests on: Nvme1n1 00:17:06.181 Test: blockdev write read block ...passed 00:17:06.439 Test: blockdev write zeroes read block ...passed 00:17:06.439 Test: blockdev write zeroes read no split ...passed 00:17:06.440 Test: blockdev write zeroes 
read split ...passed 00:17:06.440 Test: blockdev write zeroes read split partial ...passed 00:17:06.440 Test: blockdev reset ...[2024-11-19 11:19:01.771870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:06.440 [2024-11-19 11:19:01.772006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c86e0 (9): Bad file descriptor 00:17:06.440 [2024-11-19 11:19:01.829722] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:06.440 passed 00:17:06.440 Test: blockdev write read 8 blocks ...passed 00:17:06.440 Test: blockdev write read size > 128k ...passed 00:17:06.440 Test: blockdev write read invalid size ...passed 00:17:06.440 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:06.440 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:06.440 Test: blockdev write read max offset ...passed 00:17:06.698 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:06.698 Test: blockdev writev readv 8 blocks ...passed 00:17:06.698 Test: blockdev writev readv 30 x 1block ...passed 00:17:06.698 Test: blockdev writev readv block ...passed 00:17:06.698 Test: blockdev writev readv size > 128k ...passed 00:17:06.698 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:06.698 Test: blockdev comparev and writev ...[2024-11-19 11:19:02.003245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.698 [2024-11-19 11:19:02.003298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.698 [2024-11-19 11:19:02.003324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.698 [2024-11-19 
11:19:02.003341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.698 [2024-11-19 11:19:02.003759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.698 [2024-11-19 11:19:02.003784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:06.698 [2024-11-19 11:19:02.003807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.698 [2024-11-19 11:19:02.003823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:06.698 [2024-11-19 11:19:02.004282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.698 [2024-11-19 11:19:02.004306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:06.698 [2024-11-19 11:19:02.004327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.698 [2024-11-19 11:19:02.004343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:06.698 [2024-11-19 11:19:02.004820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.698 [2024-11-19 11:19:02.004844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:06.698 [2024-11-19 11:19:02.004865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:06.698 [2024-11-19 11:19:02.004881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:06.699 passed 00:17:06.699 Test: blockdev nvme passthru rw ...passed 00:17:06.699 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:19:02.086751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:06.699 [2024-11-19 11:19:02.086779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:06.699 [2024-11-19 11:19:02.086938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:06.699 [2024-11-19 11:19:02.086963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:06.699 [2024-11-19 11:19:02.087103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:06.699 [2024-11-19 11:19:02.087126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:06.699 [2024-11-19 11:19:02.087266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:06.699 [2024-11-19 11:19:02.087289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:06.699 passed 00:17:06.699 Test: blockdev nvme admin passthru ...passed 00:17:06.699 Test: blockdev copy ...passed 00:17:06.699 00:17:06.699 Run Summary: Type Total Ran Passed Failed Inactive 00:17:06.699 suites 1 1 n/a 0 0 00:17:06.699 tests 23 23 23 0 0 00:17:06.699 asserts 152 152 152 0 n/a 00:17:06.699 00:17:06.699 Elapsed time = 1.089 seconds 
00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:07.267 rmmod nvme_tcp 00:17:07.267 rmmod nvme_fabrics 00:17:07.267 rmmod nvme_keyring 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2619424 ']' 00:17:07.267 11:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2619424 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2619424 ']' 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2619424 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2619424 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2619424' 00:17:07.267 killing process with pid 2619424 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2619424 00:17:07.267 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2619424 00:17:07.527 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:07.527 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:07.527 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:07.527 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:07.527 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:07.527 11:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:07.527 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:07.527 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:07.527 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:07.527 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.527 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.527 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.066 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:10.066 00:17:10.066 real 0m7.200s 00:17:10.066 user 0m10.789s 00:17:10.066 sys 0m3.006s 00:17:10.066 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.066 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:10.066 ************************************ 00:17:10.066 END TEST nvmf_bdevio_no_huge 00:17:10.066 ************************************ 00:17:10.066 11:19:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:10.066 11:19:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:10.066 11:19:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.066 11:19:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.066 
************************************ 00:17:10.066 START TEST nvmf_tls 00:17:10.066 ************************************ 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:10.067 * Looking for test storage... 00:17:10.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:10.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.067 --rc genhtml_branch_coverage=1 00:17:10.067 --rc genhtml_function_coverage=1 00:17:10.067 --rc genhtml_legend=1 00:17:10.067 --rc geninfo_all_blocks=1 00:17:10.067 --rc geninfo_unexecuted_blocks=1 00:17:10.067 00:17:10.067 ' 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:10.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.067 --rc genhtml_branch_coverage=1 00:17:10.067 --rc genhtml_function_coverage=1 00:17:10.067 --rc genhtml_legend=1 00:17:10.067 --rc geninfo_all_blocks=1 00:17:10.067 --rc geninfo_unexecuted_blocks=1 00:17:10.067 00:17:10.067 ' 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:10.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.067 --rc genhtml_branch_coverage=1 00:17:10.067 --rc genhtml_function_coverage=1 00:17:10.067 --rc genhtml_legend=1 00:17:10.067 --rc geninfo_all_blocks=1 00:17:10.067 --rc geninfo_unexecuted_blocks=1 00:17:10.067 00:17:10.067 ' 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:10.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.067 --rc genhtml_branch_coverage=1 00:17:10.067 --rc genhtml_function_coverage=1 00:17:10.067 --rc genhtml_legend=1 00:17:10.067 --rc geninfo_all_blocks=1 00:17:10.067 --rc geninfo_unexecuted_blocks=1 00:17:10.067 00:17:10.067 ' 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.067 
11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.067 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:10.068 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:12.678 11:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:17:12.678 Found 0000:82:00.0 (0x8086 - 0x159b) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:17:12.678 Found 0000:82:00.1 (0x8086 - 0x159b) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:12.678 11:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:17:12.678 Found net devices under 0000:82:00.0: cvl_0_0 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:17:12.678 Found net devices under 0000:82:00.1: cvl_0_1 00:17:12.678 11:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:12.678 
11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:12.678 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:12.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:17:12.678 00:17:12.678 --- 10.0.0.2 ping statistics --- 00:17:12.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.678 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:12.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:12.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:17:12.679 00:17:12.679 --- 10.0.0.1 ping statistics --- 00:17:12.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.679 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2621957 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2621957 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2621957 ']' 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.679 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.679 [2024-11-19 11:19:08.043553] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:17:12.679 [2024-11-19 11:19:08.043641] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.679 [2024-11-19 11:19:08.130229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.937 [2024-11-19 11:19:08.190021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.937 [2024-11-19 11:19:08.190072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:12.937 [2024-11-19 11:19:08.190086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.937 [2024-11-19 11:19:08.190097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.937 [2024-11-19 11:19:08.190107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.937 [2024-11-19 11:19:08.190764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.937 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.937 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:12.937 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.937 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:12.937 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.937 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.937 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:12.937 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:13.195 true 00:17:13.195 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.195 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:13.453 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:13.453 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:13.453 
11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:13.712 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.712 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:13.969 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:13.969 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:13.969 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:14.227 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.227 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:14.485 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:14.485 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:14.485 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.485 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:15.050 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:15.050 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:15.050 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:15.308 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:15.308 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:15.566 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:15.566 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:15.566 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:15.824 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:15.824 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:16.082 11:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.F9SrhzHNIc 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.72pnXGyUsz 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.F9SrhzHNIc 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.72pnXGyUsz 00:17:16.082 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:16.341 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:16.907 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.F9SrhzHNIc 00:17:16.907 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.F9SrhzHNIc 00:17:16.907 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:16.907 [2024-11-19 11:19:12.369474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.907 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:17.165 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:17.423 [2024-11-19 11:19:12.906931] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:17.423 [2024-11-19 11:19:12.907177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.682 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:17.940 malloc0 00:17:17.940 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:18.198 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.F9SrhzHNIc 00:17:18.456 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:18.714 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.F9SrhzHNIc 00:17:30.916 Initializing NVMe Controllers 00:17:30.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:30.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:30.916 Initialization complete. Launching workers. 
00:17:30.916 ======================================================== 00:17:30.916 Latency(us) 00:17:30.916 Device Information : IOPS MiB/s Average min max 00:17:30.916 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8556.90 33.43 7480.67 990.10 8774.77 00:17:30.916 ======================================================== 00:17:30.916 Total : 8556.90 33.43 7480.67 990.10 8774.77 00:17:30.916 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F9SrhzHNIc 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.F9SrhzHNIc 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2623860 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2623860 /var/tmp/bdevperf.sock 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2623860 ']' 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.916 [2024-11-19 11:19:24.282873] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:17:30.916 [2024-11-19 11:19:24.282945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2623860 ] 00:17:30.916 [2024-11-19 11:19:24.356610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.916 [2024-11-19 11:19:24.413177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F9SrhzHNIc 00:17:30.916 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:17:30.916 [2024-11-19 11:19:25.064008] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:30.916 TLSTESTn1 00:17:30.916 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:30.916 Running I/O for 10 seconds... 00:17:31.850 3416.00 IOPS, 13.34 MiB/s [2024-11-19T10:19:28.281Z] 3415.00 IOPS, 13.34 MiB/s [2024-11-19T10:19:29.655Z] 3448.33 IOPS, 13.47 MiB/s [2024-11-19T10:19:30.589Z] 3484.25 IOPS, 13.61 MiB/s [2024-11-19T10:19:31.523Z] 3482.20 IOPS, 13.60 MiB/s [2024-11-19T10:19:32.456Z] 3473.83 IOPS, 13.57 MiB/s [2024-11-19T10:19:33.389Z] 3478.86 IOPS, 13.59 MiB/s [2024-11-19T10:19:34.324Z] 3494.88 IOPS, 13.65 MiB/s [2024-11-19T10:19:35.698Z] 3502.00 IOPS, 13.68 MiB/s [2024-11-19T10:19:35.698Z] 3508.80 IOPS, 13.71 MiB/s 00:17:40.201 Latency(us) 00:17:40.201 [2024-11-19T10:19:35.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.201 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:40.201 Verification LBA range: start 0x0 length 0x2000 00:17:40.201 TLSTESTn1 : 10.03 3510.16 13.71 0.00 0.00 36391.96 5655.51 37865.24 00:17:40.201 [2024-11-19T10:19:35.698Z] =================================================================================================================== 00:17:40.201 [2024-11-19T10:19:35.698Z] Total : 3510.16 13.71 0.00 0.00 36391.96 5655.51 37865.24 00:17:40.201 { 00:17:40.201 "results": [ 00:17:40.201 { 00:17:40.201 "job": "TLSTESTn1", 00:17:40.201 "core_mask": "0x4", 00:17:40.201 "workload": "verify", 00:17:40.201 "status": "finished", 00:17:40.201 "verify_range": { 00:17:40.201 "start": 0, 00:17:40.201 "length": 8192 00:17:40.201 }, 00:17:40.201 "queue_depth": 128, 00:17:40.201 "io_size": 4096, 00:17:40.201 "runtime": 10.032318, 00:17:40.201 "iops": 
3510.1558782327274, 00:17:40.201 "mibps": 13.711546399346592, 00:17:40.201 "io_failed": 0, 00:17:40.201 "io_timeout": 0, 00:17:40.201 "avg_latency_us": 36391.96458849081, 00:17:40.201 "min_latency_us": 5655.514074074074, 00:17:40.201 "max_latency_us": 37865.24444444444 00:17:40.201 } 00:17:40.201 ], 00:17:40.201 "core_count": 1 00:17:40.201 } 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2623860 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2623860 ']' 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2623860 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2623860 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2623860' 00:17:40.201 killing process with pid 2623860 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2623860 00:17:40.201 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.201 00:17:40.201 Latency(us) 00:17:40.201 [2024-11-19T10:19:35.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.201 [2024-11-19T10:19:35.698Z] 
=================================================================================================================== 00:17:40.201 [2024-11-19T10:19:35.698Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2623860 00:17:40.201 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.72pnXGyUsz 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.72pnXGyUsz 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.72pnXGyUsz 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.72pnXGyUsz 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2625182 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2625182 /var/tmp/bdevperf.sock 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2625182 ']' 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.202 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.202 [2024-11-19 11:19:35.642047] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:17:40.202 [2024-11-19 11:19:35.642127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625182 ] 00:17:40.510 [2024-11-19 11:19:35.717069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.510 [2024-11-19 11:19:35.771128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.510 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.510 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:40.510 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.72pnXGyUsz 00:17:40.788 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:41.046 [2024-11-19 11:19:36.412635] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.046 [2024-11-19 11:19:36.419762] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:41.046 [2024-11-19 11:19:36.419839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1b2c0 (107): Transport endpoint is not connected 00:17:41.046 [2024-11-19 11:19:36.420791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1b2c0 (9): Bad file descriptor 00:17:41.046 [2024-11-19 
11:19:36.421790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:41.046 [2024-11-19 11:19:36.421810] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:41.046 [2024-11-19 11:19:36.421839] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:41.046 [2024-11-19 11:19:36.421858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:17:41.046 request: 00:17:41.046 { 00:17:41.046 "name": "TLSTEST", 00:17:41.046 "trtype": "tcp", 00:17:41.046 "traddr": "10.0.0.2", 00:17:41.046 "adrfam": "ipv4", 00:17:41.046 "trsvcid": "4420", 00:17:41.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.046 "prchk_reftag": false, 00:17:41.046 "prchk_guard": false, 00:17:41.046 "hdgst": false, 00:17:41.046 "ddgst": false, 00:17:41.046 "psk": "key0", 00:17:41.046 "allow_unrecognized_csi": false, 00:17:41.046 "method": "bdev_nvme_attach_controller", 00:17:41.046 "req_id": 1 00:17:41.046 } 00:17:41.046 Got JSON-RPC error response 00:17:41.046 response: 00:17:41.046 { 00:17:41.046 "code": -5, 00:17:41.046 "message": "Input/output error" 00:17:41.046 } 00:17:41.046 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2625182 00:17:41.046 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2625182 ']' 00:17:41.046 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2625182 00:17:41.046 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:41.046 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.046 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625182 00:17:41.046 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:41.046 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:41.046 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625182' 00:17:41.046 killing process with pid 2625182 00:17:41.046 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2625182 00:17:41.046 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.046 00:17:41.046 Latency(us) 00:17:41.046 [2024-11-19T10:19:36.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.046 [2024-11-19T10:19:36.543Z] =================================================================================================================== 00:17:41.046 [2024-11-19T10:19:36.543Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.046 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2625182 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.F9SrhzHNIc 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.F9SrhzHNIc 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.F9SrhzHNIc 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.F9SrhzHNIc 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2625327 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2625327 
/var/tmp/bdevperf.sock 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2625327 ']' 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.304 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.304 [2024-11-19 11:19:36.709176] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:17:41.304 [2024-11-19 11:19:36.709252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625327 ] 00:17:41.304 [2024-11-19 11:19:36.785066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.563 [2024-11-19 11:19:36.846442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.563 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.563 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:41.563 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F9SrhzHNIc 00:17:41.821 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:42.079 [2024-11-19 11:19:37.481426] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:42.079 [2024-11-19 11:19:37.492467] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:42.079 [2024-11-19 11:19:37.492497] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:42.079 [2024-11-19 11:19:37.492555] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:42.079 [2024-11-19 11:19:37.492712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe502c0 (107): Transport endpoint is not connected 00:17:42.079 [2024-11-19 11:19:37.493703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe502c0 (9): Bad file descriptor 00:17:42.079 [2024-11-19 11:19:37.494703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:42.079 [2024-11-19 11:19:37.494727] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:42.079 [2024-11-19 11:19:37.494756] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:42.079 [2024-11-19 11:19:37.494775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:17:42.079 request: 00:17:42.079 { 00:17:42.079 "name": "TLSTEST", 00:17:42.079 "trtype": "tcp", 00:17:42.079 "traddr": "10.0.0.2", 00:17:42.079 "adrfam": "ipv4", 00:17:42.079 "trsvcid": "4420", 00:17:42.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.079 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:42.079 "prchk_reftag": false, 00:17:42.079 "prchk_guard": false, 00:17:42.079 "hdgst": false, 00:17:42.079 "ddgst": false, 00:17:42.079 "psk": "key0", 00:17:42.079 "allow_unrecognized_csi": false, 00:17:42.079 "method": "bdev_nvme_attach_controller", 00:17:42.079 "req_id": 1 00:17:42.079 } 00:17:42.079 Got JSON-RPC error response 00:17:42.079 response: 00:17:42.079 { 00:17:42.079 "code": -5, 00:17:42.079 "message": "Input/output error" 00:17:42.079 } 00:17:42.079 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2625327 00:17:42.079 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2625327 ']' 00:17:42.079 11:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2625327 00:17:42.079 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:42.079 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.079 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625327 00:17:42.079 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:42.079 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:42.079 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625327' 00:17:42.079 killing process with pid 2625327 00:17:42.079 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2625327 00:17:42.079 Received shutdown signal, test time was about 10.000000 seconds 00:17:42.079 00:17:42.079 Latency(us) 00:17:42.079 [2024-11-19T10:19:37.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.079 [2024-11-19T10:19:37.576Z] =================================================================================================================== 00:17:42.079 [2024-11-19T10:19:37.576Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:42.079 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2625327 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:42.338 11:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.F9SrhzHNIc 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.F9SrhzHNIc 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.F9SrhzHNIc 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.F9SrhzHNIc 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2625466 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2625466 /var/tmp/bdevperf.sock 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2625466 ']' 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.338 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.338 [2024-11-19 11:19:37.824253] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:17:42.338 [2024-11-19 11:19:37.824334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625466 ] 00:17:42.596 [2024-11-19 11:19:37.899964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.596 [2024-11-19 11:19:37.957934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.596 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.596 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:42.596 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F9SrhzHNIc 00:17:43.162 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:43.162 [2024-11-19 11:19:38.626847] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.162 [2024-11-19 11:19:38.635463] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:43.162 [2024-11-19 11:19:38.635493] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:43.162 [2024-11-19 11:19:38.635549] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:43.162 [2024-11-19 11:19:38.636058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233a2c0 (107): Transport endpoint is not connected 00:17:43.162 [2024-11-19 11:19:38.637048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233a2c0 (9): Bad file descriptor 00:17:43.162 [2024-11-19 11:19:38.638048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:43.162 [2024-11-19 11:19:38.638068] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:43.162 [2024-11-19 11:19:38.638097] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:43.163 [2024-11-19 11:19:38.638115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:17:43.163 request: 00:17:43.163 { 00:17:43.163 "name": "TLSTEST", 00:17:43.163 "trtype": "tcp", 00:17:43.163 "traddr": "10.0.0.2", 00:17:43.163 "adrfam": "ipv4", 00:17:43.163 "trsvcid": "4420", 00:17:43.163 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:43.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.163 "prchk_reftag": false, 00:17:43.163 "prchk_guard": false, 00:17:43.163 "hdgst": false, 00:17:43.163 "ddgst": false, 00:17:43.163 "psk": "key0", 00:17:43.163 "allow_unrecognized_csi": false, 00:17:43.163 "method": "bdev_nvme_attach_controller", 00:17:43.163 "req_id": 1 00:17:43.163 } 00:17:43.163 Got JSON-RPC error response 00:17:43.163 response: 00:17:43.163 { 00:17:43.163 "code": -5, 00:17:43.163 "message": "Input/output error" 00:17:43.163 } 00:17:43.163 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2625466 00:17:43.163 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2625466 ']' 00:17:43.163 11:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2625466 00:17:43.163 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:43.421 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.421 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625466 00:17:43.421 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:43.421 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:43.421 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625466' 00:17:43.421 killing process with pid 2625466 00:17:43.421 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2625466 00:17:43.421 Received shutdown signal, test time was about 10.000000 seconds 00:17:43.421 00:17:43.421 Latency(us) 00:17:43.421 [2024-11-19T10:19:38.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.421 [2024-11-19T10:19:38.918Z] =================================================================================================================== 00:17:43.421 [2024-11-19T10:19:38.918Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:43.421 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2625466 00:17:43.421 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:43.421 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:43.421 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.421 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.422 11:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2625607 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2625607 /var/tmp/bdevperf.sock 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2625607 ']' 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.422 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.680 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.680 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.680 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.680 [2024-11-19 11:19:38.962768] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:17:43.680 [2024-11-19 11:19:38.962845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625607 ] 00:17:43.680 [2024-11-19 11:19:39.036703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.680 [2024-11-19 11:19:39.090483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.937 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.937 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:43.937 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:44.196 [2024-11-19 11:19:39.451146] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:44.196 [2024-11-19 11:19:39.451187] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:44.196 request: 00:17:44.196 { 00:17:44.196 "name": "key0", 00:17:44.196 "path": "", 00:17:44.196 "method": "keyring_file_add_key", 00:17:44.196 "req_id": 1 00:17:44.196 } 00:17:44.196 Got JSON-RPC error response 00:17:44.196 response: 00:17:44.196 { 00:17:44.196 "code": -1, 00:17:44.196 "message": "Operation not permitted" 00:17:44.196 } 00:17:44.196 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:44.455 [2024-11-19 11:19:39.740058] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:17:44.455 [2024-11-19 11:19:39.740133] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:44.455 request: 00:17:44.455 { 00:17:44.455 "name": "TLSTEST", 00:17:44.455 "trtype": "tcp", 00:17:44.455 "traddr": "10.0.0.2", 00:17:44.455 "adrfam": "ipv4", 00:17:44.455 "trsvcid": "4420", 00:17:44.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:44.455 "prchk_reftag": false, 00:17:44.455 "prchk_guard": false, 00:17:44.455 "hdgst": false, 00:17:44.455 "ddgst": false, 00:17:44.455 "psk": "key0", 00:17:44.455 "allow_unrecognized_csi": false, 00:17:44.455 "method": "bdev_nvme_attach_controller", 00:17:44.455 "req_id": 1 00:17:44.455 } 00:17:44.455 Got JSON-RPC error response 00:17:44.455 response: 00:17:44.455 { 00:17:44.455 "code": -126, 00:17:44.455 "message": "Required key not available" 00:17:44.455 } 00:17:44.455 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2625607 00:17:44.455 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2625607 ']' 00:17:44.455 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2625607 00:17:44.455 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:44.455 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.455 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625607 00:17:44.455 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:44.455 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:44.455 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625607' 00:17:44.455 killing process with pid 2625607 
00:17:44.455 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2625607 00:17:44.455 Received shutdown signal, test time was about 10.000000 seconds 00:17:44.455 00:17:44.455 Latency(us) 00:17:44.455 [2024-11-19T10:19:39.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.455 [2024-11-19T10:19:39.952Z] =================================================================================================================== 00:17:44.455 [2024-11-19T10:19:39.952Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:44.455 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2625607 00:17:44.712 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:44.713 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:44.713 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.713 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.713 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.713 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2621957 00:17:44.713 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2621957 ']' 00:17:44.713 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2621957 00:17:44.713 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:44.713 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.713 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2621957 00:17:44.713 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:17:44.713 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:44.713 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2621957' 00:17:44.713 killing process with pid 2621957 00:17:44.713 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2621957 00:17:44.713 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2621957 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.lenRvSW2hN 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:44.970 11:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.lenRvSW2hN 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.970 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.971 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2625764 00:17:44.971 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:44.971 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2625764 00:17:44.971 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2625764 ']' 00:17:44.971 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.971 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.971 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.971 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.971 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.971 [2024-11-19 11:19:40.369564] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:17:44.971 [2024-11-19 11:19:40.369657] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.971 [2024-11-19 11:19:40.455515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.229 [2024-11-19 11:19:40.511785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.229 [2024-11-19 11:19:40.511837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.229 [2024-11-19 11:19:40.511865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.229 [2024-11-19 11:19:40.511877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.230 [2024-11-19 11:19:40.511888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
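The `key_long` value produced by `format_interchange_psk` in the trace above can be reproduced standalone. This is a minimal sketch of what `nvmf/common.sh`'s `format_key` appears to do, judging from the traced locals (`prefix`, `key`, `digest`) and the inline `python -` step: append a little-endian CRC32 of the configured key bytes, base64-encode, and wrap in the `NVMeTLSkey-1:<hmac>:...:` interchange format. Treat the exact CRC/byte-order choice as an inference from the log, not a normative statement of the script's contents:

```python
import base64
import zlib

def format_interchange_psk(key: str, hmac: int) -> str:
    """Format a configured PSK in the NVMe/TCP TLS interchange format.

    Sketch of what format_key in the trace above appears to compute:
    little-endian CRC32 of the key bytes is appended, the result is
    base64-encoded, and the whole thing is wrapped in
    "NVMeTLSkey-1:<hmac as 2 hex digits>:...:" with a trailing colon.
    """
    crc = zlib.crc32(key.encode()).to_bytes(4, "little")
    b64 = base64.b64encode(key.encode() + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac, b64)

# The key and digest from the trace: target/tls.sh@160 passes digest 2.
key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
```

With the key and digest from the trace, this yields the same `NVMeTLSkey-1:02:...==:` string that `tls.sh` writes to `/tmp/tmp.lenRvSW2hN`.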
00:17:45.230 [2024-11-19 11:19:40.512531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.230 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.230 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:45.230 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:45.230 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.230 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.230 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.230 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.lenRvSW2hN 00:17:45.230 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lenRvSW2hN 00:17:45.230 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:45.488 [2024-11-19 11:19:40.895532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.488 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:45.746 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:46.004 [2024-11-19 11:19:41.416930] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:46.004 [2024-11-19 11:19:41.417185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:17:46.004 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:46.262 malloc0 00:17:46.520 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:46.777 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lenRvSW2hN 00:17:47.035 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lenRvSW2hN 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lenRvSW2hN 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2626054 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:47.294 11:19:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2626054 /var/tmp/bdevperf.sock 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2626054 ']' 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.294 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.294 [2024-11-19 11:19:42.757006] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:17:47.294 [2024-11-19 11:19:42.757076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2626054 ] 00:17:47.553 [2024-11-19 11:19:42.840222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.553 [2024-11-19 11:19:42.900618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.553 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.553 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:47.553 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lenRvSW2hN 00:17:47.811 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:48.070 [2024-11-19 11:19:43.517409] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:48.328 TLSTESTn1 00:17:48.328 11:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:48.328 Running I/O for 10 seconds... 
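Stripped of xtrace noise and absolute paths, the target-side setup that `setup_nvmf_tgt` performed above (tls.sh lines 50-59) boils down to the following RPC sequence. This is a non-runnable sketch: it assumes a live `nvmf_tgt` listening on the default `/var/tmp/spdk.sock`, and abbreviates the full `scripts/rpc.py` path; every command and argument is taken verbatim from the trace:

```shell
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k  # -k: TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.lenRvSW2hN   # key file must be mode 0600
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

The bdevperf side then repeats only the `keyring_file_add_key` step against `/var/tmp/bdevperf.sock` before `bdev_nvme_attach_controller ... --psk key0`.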
00:17:50.637 3448.00 IOPS, 13.47 MiB/s [2024-11-19T10:19:47.068Z] 3507.00 IOPS, 13.70 MiB/s [2024-11-19T10:19:47.999Z] 3552.67 IOPS, 13.88 MiB/s [2024-11-19T10:19:48.931Z] 3548.50 IOPS, 13.86 MiB/s [2024-11-19T10:19:49.864Z] 3577.80 IOPS, 13.98 MiB/s [2024-11-19T10:19:50.799Z] 3581.33 IOPS, 13.99 MiB/s [2024-11-19T10:19:51.733Z] 3569.43 IOPS, 13.94 MiB/s [2024-11-19T10:19:53.108Z] 3587.50 IOPS, 14.01 MiB/s [2024-11-19T10:19:54.042Z] 3589.56 IOPS, 14.02 MiB/s [2024-11-19T10:19:54.042Z] 3592.50 IOPS, 14.03 MiB/s 00:17:58.545 Latency(us) 00:17:58.545 [2024-11-19T10:19:54.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.545 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:58.545 Verification LBA range: start 0x0 length 0x2000 00:17:58.545 TLSTESTn1 : 10.02 3598.00 14.05 0.00 0.00 35518.47 7815.77 33010.73 00:17:58.545 [2024-11-19T10:19:54.042Z] =================================================================================================================== 00:17:58.545 [2024-11-19T10:19:54.042Z] Total : 3598.00 14.05 0.00 0.00 35518.47 7815.77 33010.73 00:17:58.545 { 00:17:58.545 "results": [ 00:17:58.545 { 00:17:58.545 "job": "TLSTESTn1", 00:17:58.545 "core_mask": "0x4", 00:17:58.545 "workload": "verify", 00:17:58.545 "status": "finished", 00:17:58.545 "verify_range": { 00:17:58.545 "start": 0, 00:17:58.545 "length": 8192 00:17:58.545 }, 00:17:58.545 "queue_depth": 128, 00:17:58.545 "io_size": 4096, 00:17:58.545 "runtime": 10.019456, 00:17:58.545 "iops": 3597.999731721962, 00:17:58.545 "mibps": 14.054686452038913, 00:17:58.545 "io_failed": 0, 00:17:58.545 "io_timeout": 0, 00:17:58.545 "avg_latency_us": 35518.46987036524, 00:17:58.545 "min_latency_us": 7815.774814814815, 00:17:58.545 "max_latency_us": 33010.72592592592 00:17:58.545 } 00:17:58.545 ], 00:17:58.545 "core_count": 1 00:17:58.545 } 00:17:58.545 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:17:58.545 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2626054 00:17:58.545 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2626054 ']' 00:17:58.545 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2626054 00:17:58.545 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:58.545 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.545 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2626054 00:17:58.545 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:58.545 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:58.545 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2626054' 00:17:58.545 killing process with pid 2626054 00:17:58.545 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2626054 00:17:58.545 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.545 00:17:58.545 Latency(us) 00:17:58.545 [2024-11-19T10:19:54.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.545 [2024-11-19T10:19:54.042Z] =================================================================================================================== 00:17:58.545 [2024-11-19T10:19:54.042Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.545 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2626054 00:17:58.545 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.lenRvSW2hN 00:17:58.545 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lenRvSW2hN 00:17:58.545 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:58.545 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lenRvSW2hN 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lenRvSW2hN 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lenRvSW2hN 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2627385 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2627385 /var/tmp/bdevperf.sock 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2627385 ']' 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.804 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.804 [2024-11-19 11:19:54.094797] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:17:58.804 [2024-11-19 11:19:54.094879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2627385 ] 00:17:58.804 [2024-11-19 11:19:54.172118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.804 [2024-11-19 11:19:54.228895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.063 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.063 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:59.063 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lenRvSW2hN 00:17:59.321 [2024-11-19 11:19:54.586389] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lenRvSW2hN': 0100666 00:17:59.321 [2024-11-19 11:19:54.586433] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:59.321 request: 00:17:59.321 { 00:17:59.321 "name": "key0", 00:17:59.321 "path": "/tmp/tmp.lenRvSW2hN", 00:17:59.321 "method": "keyring_file_add_key", 00:17:59.321 "req_id": 1 00:17:59.321 } 00:17:59.321 Got JSON-RPC error response 00:17:59.321 response: 00:17:59.321 { 00:17:59.321 "code": -1, 00:17:59.321 "message": "Operation not permitted" 00:17:59.321 } 00:17:59.321 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:59.579 [2024-11-19 11:19:54.851198] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.579 [2024-11-19 11:19:54.851271] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:59.579 request: 00:17:59.579 { 00:17:59.579 "name": "TLSTEST", 00:17:59.579 "trtype": "tcp", 00:17:59.579 "traddr": "10.0.0.2", 00:17:59.579 "adrfam": "ipv4", 00:17:59.579 "trsvcid": "4420", 00:17:59.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:59.579 "prchk_reftag": false, 00:17:59.579 "prchk_guard": false, 00:17:59.579 "hdgst": false, 00:17:59.579 "ddgst": false, 00:17:59.579 "psk": "key0", 00:17:59.579 "allow_unrecognized_csi": false, 00:17:59.579 "method": "bdev_nvme_attach_controller", 00:17:59.579 "req_id": 1 00:17:59.579 } 00:17:59.579 Got JSON-RPC error response 00:17:59.579 response: 00:17:59.579 { 00:17:59.579 "code": -126, 00:17:59.579 "message": "Required key not available" 00:17:59.579 } 00:17:59.579 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2627385 00:17:59.579 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2627385 ']' 00:17:59.579 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2627385 00:17:59.579 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:59.579 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.579 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2627385 00:17:59.579 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:59.579 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:59.580 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2627385' 00:17:59.580 killing process with pid 2627385 00:17:59.580 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2627385 00:17:59.580 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.580 00:17:59.580 Latency(us) 00:17:59.580 [2024-11-19T10:19:55.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.580 [2024-11-19T10:19:55.077Z] =================================================================================================================== 00:17:59.580 [2024-11-19T10:19:55.077Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:59.580 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2627385 00:17:59.838 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:59.838 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:59.838 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:59.839 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:59.839 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:59.839 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2625764 00:17:59.839 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2625764 ']' 00:17:59.839 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2625764 00:17:59.839 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:59.839 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.839 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625764 00:17:59.839 
11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:59.839 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:59.839 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625764' 00:17:59.839 killing process with pid 2625764 00:17:59.839 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2625764 00:17:59.839 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2625764 00:18:00.096 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:00.097 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.097 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.097 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.097 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2627646 00:18:00.097 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.097 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2627646 00:18:00.097 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2627646 ']' 00:18:00.097 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.097 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.097 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:00.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.097 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.097 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.097 [2024-11-19 11:19:55.456116] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:18:00.097 [2024-11-19 11:19:55.456200] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.097 [2024-11-19 11:19:55.542087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.355 [2024-11-19 11:19:55.599202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.355 [2024-11-19 11:19:55.599247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.355 [2024-11-19 11:19:55.599274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.355 [2024-11-19 11:19:55.599285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.355 [2024-11-19 11:19:55.599295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
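The `Invalid permissions for key file '/tmp/tmp.lenRvSW2hN': 0100666` failures above come from `keyring_file_check_path` rejecting key files readable by group or others (the test deliberately ran `chmod 0666` to provoke this). The sketch below mirrors that behavior with a simplified check; the exact bitmask SPDK applies is an assumption, but the observable contract in the log is that 0600 is accepted and 0666 is refused:

```python
import os
import stat
import tempfile

def key_file_permissions_ok(path: str) -> bool:
    # Reject key files with any group/other permission bits set,
    # approximating SPDK's keyring_file_check_path: 0600 passes,
    # 0666 fails with "Invalid permissions" / -1 Operation not permitted.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0

# Demonstrate both outcomes seen in the trace on a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    key_path = f.name
os.chmod(key_path, 0o666)
loose_ok = key_file_permissions_ok(key_path)   # world-readable: rejected
os.chmod(key_path, 0o600)
strict_ok = key_file_permissions_ok(key_path)  # owner-only: accepted
os.unlink(key_path)
```

This also explains why `tls.sh@182` restores `chmod 0600` before re-running `setup_nvmf_tgt`: the same file is reused once its mode is tightened again.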
00:18:00.355 [2024-11-19 11:19:55.599897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.lenRvSW2hN 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lenRvSW2hN 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.lenRvSW2hN 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lenRvSW2hN 00:18:00.355 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:00.643 [2024-11-19 11:19:55.979460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.643 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:00.925 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:01.183 [2024-11-19 11:19:56.593202] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.183 [2024-11-19 11:19:56.593506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.183 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:01.442 malloc0 00:18:01.442 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:02.008 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lenRvSW2hN 00:18:02.266 [2024-11-19 11:19:57.530928] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lenRvSW2hN': 0100666 00:18:02.266 [2024-11-19 11:19:57.530987] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:02.266 request: 00:18:02.266 { 00:18:02.266 "name": "key0", 00:18:02.266 "path": "/tmp/tmp.lenRvSW2hN", 00:18:02.266 "method": "keyring_file_add_key", 00:18:02.266 "req_id": 1 
00:18:02.266 } 00:18:02.266 Got JSON-RPC error response 00:18:02.266 response: 00:18:02.266 { 00:18:02.266 "code": -1, 00:18:02.266 "message": "Operation not permitted" 00:18:02.266 } 00:18:02.266 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:02.524 [2024-11-19 11:19:57.815774] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:02.524 [2024-11-19 11:19:57.815857] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:02.524 request: 00:18:02.524 { 00:18:02.524 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.524 "host": "nqn.2016-06.io.spdk:host1", 00:18:02.524 "psk": "key0", 00:18:02.524 "method": "nvmf_subsystem_add_host", 00:18:02.524 "req_id": 1 00:18:02.524 } 00:18:02.524 Got JSON-RPC error response 00:18:02.524 response: 00:18:02.524 { 00:18:02.524 "code": -32603, 00:18:02.524 "message": "Internal error" 00:18:02.524 } 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2627646 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2627646 ']' 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2627646 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:02.524 11:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2627646 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2627646' 00:18:02.524 killing process with pid 2627646 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2627646 00:18:02.524 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2627646 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.lenRvSW2hN 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2627959 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2627959 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2627959 ']' 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.782 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.782 [2024-11-19 11:19:58.167632] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:18:02.782 [2024-11-19 11:19:58.167736] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.782 [2024-11-19 11:19:58.247529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.041 [2024-11-19 11:19:58.303360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.041 [2024-11-19 11:19:58.303431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.041 [2024-11-19 11:19:58.303445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.041 [2024-11-19 11:19:58.303471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.041 [2024-11-19 11:19:58.303480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:03.041 [2024-11-19 11:19:58.304052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.041 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.041 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:03.041 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:03.041 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.041 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.041 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.041 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.lenRvSW2hN 00:18:03.041 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lenRvSW2hN 00:18:03.041 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:03.299 [2024-11-19 11:19:58.698313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.299 11:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:03.557 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:03.815 [2024-11-19 11:19:59.275889] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:03.815 [2024-11-19 11:19:59.276187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:03.815 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:04.382 malloc0 00:18:04.382 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:04.382 11:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lenRvSW2hN 00:18:04.949 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.949 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2628245 00:18:04.949 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.949 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.949 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2628245 /var/tmp/bdevperf.sock 00:18:04.949 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2628245 ']' 00:18:04.949 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.949 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.949 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:04.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.949 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.949 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.208 [2024-11-19 11:20:00.462336] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:18:05.208 [2024-11-19 11:20:00.462437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628245 ] 00:18:05.208 [2024-11-19 11:20:00.539477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.208 [2024-11-19 11:20:00.597785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.466 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.466 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:05.466 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lenRvSW2hN 00:18:05.724 11:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:05.982 [2024-11-19 11:20:01.249891] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:05.982 TLSTESTn1 00:18:05.982 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:06.239 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:06.239 "subsystems": [ 00:18:06.239 { 00:18:06.239 "subsystem": "keyring", 00:18:06.239 "config": [ 00:18:06.239 { 00:18:06.239 "method": "keyring_file_add_key", 00:18:06.239 "params": { 00:18:06.239 "name": "key0", 00:18:06.239 "path": "/tmp/tmp.lenRvSW2hN" 00:18:06.239 } 00:18:06.239 } 00:18:06.239 ] 00:18:06.239 }, 00:18:06.239 { 00:18:06.239 "subsystem": "iobuf", 00:18:06.239 "config": [ 00:18:06.239 { 00:18:06.239 "method": "iobuf_set_options", 00:18:06.239 "params": { 00:18:06.239 "small_pool_count": 8192, 00:18:06.239 "large_pool_count": 1024, 00:18:06.240 "small_bufsize": 8192, 00:18:06.240 "large_bufsize": 135168, 00:18:06.240 "enable_numa": false 00:18:06.240 } 00:18:06.240 } 00:18:06.240 ] 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "subsystem": "sock", 00:18:06.240 "config": [ 00:18:06.240 { 00:18:06.240 "method": "sock_set_default_impl", 00:18:06.240 "params": { 00:18:06.240 "impl_name": "posix" 00:18:06.240 } 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "method": "sock_impl_set_options", 00:18:06.240 "params": { 00:18:06.240 "impl_name": "ssl", 00:18:06.240 "recv_buf_size": 4096, 00:18:06.240 "send_buf_size": 4096, 00:18:06.240 "enable_recv_pipe": true, 00:18:06.240 "enable_quickack": false, 00:18:06.240 "enable_placement_id": 0, 00:18:06.240 "enable_zerocopy_send_server": true, 00:18:06.240 "enable_zerocopy_send_client": false, 00:18:06.240 "zerocopy_threshold": 0, 00:18:06.240 "tls_version": 0, 00:18:06.240 "enable_ktls": false 00:18:06.240 } 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "method": "sock_impl_set_options", 00:18:06.240 "params": { 00:18:06.240 "impl_name": "posix", 00:18:06.240 "recv_buf_size": 2097152, 00:18:06.240 "send_buf_size": 2097152, 00:18:06.240 "enable_recv_pipe": true, 00:18:06.240 "enable_quickack": false, 00:18:06.240 "enable_placement_id": 0, 
00:18:06.240 "enable_zerocopy_send_server": true, 00:18:06.240 "enable_zerocopy_send_client": false, 00:18:06.240 "zerocopy_threshold": 0, 00:18:06.240 "tls_version": 0, 00:18:06.240 "enable_ktls": false 00:18:06.240 } 00:18:06.240 } 00:18:06.240 ] 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "subsystem": "vmd", 00:18:06.240 "config": [] 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "subsystem": "accel", 00:18:06.240 "config": [ 00:18:06.240 { 00:18:06.240 "method": "accel_set_options", 00:18:06.240 "params": { 00:18:06.240 "small_cache_size": 128, 00:18:06.240 "large_cache_size": 16, 00:18:06.240 "task_count": 2048, 00:18:06.240 "sequence_count": 2048, 00:18:06.240 "buf_count": 2048 00:18:06.240 } 00:18:06.240 } 00:18:06.240 ] 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "subsystem": "bdev", 00:18:06.240 "config": [ 00:18:06.240 { 00:18:06.240 "method": "bdev_set_options", 00:18:06.240 "params": { 00:18:06.240 "bdev_io_pool_size": 65535, 00:18:06.240 "bdev_io_cache_size": 256, 00:18:06.240 "bdev_auto_examine": true, 00:18:06.240 "iobuf_small_cache_size": 128, 00:18:06.240 "iobuf_large_cache_size": 16 00:18:06.240 } 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "method": "bdev_raid_set_options", 00:18:06.240 "params": { 00:18:06.240 "process_window_size_kb": 1024, 00:18:06.240 "process_max_bandwidth_mb_sec": 0 00:18:06.240 } 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "method": "bdev_iscsi_set_options", 00:18:06.240 "params": { 00:18:06.240 "timeout_sec": 30 00:18:06.240 } 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "method": "bdev_nvme_set_options", 00:18:06.240 "params": { 00:18:06.240 "action_on_timeout": "none", 00:18:06.240 "timeout_us": 0, 00:18:06.240 "timeout_admin_us": 0, 00:18:06.240 "keep_alive_timeout_ms": 10000, 00:18:06.240 "arbitration_burst": 0, 00:18:06.240 "low_priority_weight": 0, 00:18:06.240 "medium_priority_weight": 0, 00:18:06.240 "high_priority_weight": 0, 00:18:06.240 "nvme_adminq_poll_period_us": 10000, 00:18:06.240 "nvme_ioq_poll_period_us": 0, 
00:18:06.240 "io_queue_requests": 0, 00:18:06.240 "delay_cmd_submit": true, 00:18:06.240 "transport_retry_count": 4, 00:18:06.240 "bdev_retry_count": 3, 00:18:06.240 "transport_ack_timeout": 0, 00:18:06.240 "ctrlr_loss_timeout_sec": 0, 00:18:06.240 "reconnect_delay_sec": 0, 00:18:06.240 "fast_io_fail_timeout_sec": 0, 00:18:06.240 "disable_auto_failback": false, 00:18:06.240 "generate_uuids": false, 00:18:06.240 "transport_tos": 0, 00:18:06.240 "nvme_error_stat": false, 00:18:06.240 "rdma_srq_size": 0, 00:18:06.240 "io_path_stat": false, 00:18:06.240 "allow_accel_sequence": false, 00:18:06.240 "rdma_max_cq_size": 0, 00:18:06.240 "rdma_cm_event_timeout_ms": 0, 00:18:06.240 "dhchap_digests": [ 00:18:06.240 "sha256", 00:18:06.240 "sha384", 00:18:06.240 "sha512" 00:18:06.240 ], 00:18:06.240 "dhchap_dhgroups": [ 00:18:06.240 "null", 00:18:06.240 "ffdhe2048", 00:18:06.240 "ffdhe3072", 00:18:06.240 "ffdhe4096", 00:18:06.240 "ffdhe6144", 00:18:06.240 "ffdhe8192" 00:18:06.240 ] 00:18:06.240 } 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "method": "bdev_nvme_set_hotplug", 00:18:06.240 "params": { 00:18:06.240 "period_us": 100000, 00:18:06.240 "enable": false 00:18:06.240 } 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "method": "bdev_malloc_create", 00:18:06.240 "params": { 00:18:06.240 "name": "malloc0", 00:18:06.240 "num_blocks": 8192, 00:18:06.240 "block_size": 4096, 00:18:06.240 "physical_block_size": 4096, 00:18:06.240 "uuid": "6d68f742-6c43-49ce-9a76-f830cb527dda", 00:18:06.240 "optimal_io_boundary": 0, 00:18:06.240 "md_size": 0, 00:18:06.240 "dif_type": 0, 00:18:06.240 "dif_is_head_of_md": false, 00:18:06.240 "dif_pi_format": 0 00:18:06.240 } 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "method": "bdev_wait_for_examine" 00:18:06.240 } 00:18:06.240 ] 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "subsystem": "nbd", 00:18:06.240 "config": [] 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "subsystem": "scheduler", 00:18:06.240 "config": [ 00:18:06.240 { 00:18:06.240 "method": 
"framework_set_scheduler", 00:18:06.240 "params": { 00:18:06.240 "name": "static" 00:18:06.240 } 00:18:06.240 } 00:18:06.240 ] 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "subsystem": "nvmf", 00:18:06.240 "config": [ 00:18:06.240 { 00:18:06.240 "method": "nvmf_set_config", 00:18:06.240 "params": { 00:18:06.240 "discovery_filter": "match_any", 00:18:06.240 "admin_cmd_passthru": { 00:18:06.240 "identify_ctrlr": false 00:18:06.240 }, 00:18:06.240 "dhchap_digests": [ 00:18:06.240 "sha256", 00:18:06.240 "sha384", 00:18:06.240 "sha512" 00:18:06.240 ], 00:18:06.240 "dhchap_dhgroups": [ 00:18:06.240 "null", 00:18:06.240 "ffdhe2048", 00:18:06.240 "ffdhe3072", 00:18:06.240 "ffdhe4096", 00:18:06.240 "ffdhe6144", 00:18:06.240 "ffdhe8192" 00:18:06.240 ] 00:18:06.240 } 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "method": "nvmf_set_max_subsystems", 00:18:06.240 "params": { 00:18:06.240 "max_subsystems": 1024 00:18:06.240 } 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "method": "nvmf_set_crdt", 00:18:06.240 "params": { 00:18:06.240 "crdt1": 0, 00:18:06.240 "crdt2": 0, 00:18:06.240 "crdt3": 0 00:18:06.240 } 00:18:06.240 }, 00:18:06.240 { 00:18:06.240 "method": "nvmf_create_transport", 00:18:06.240 "params": { 00:18:06.240 "trtype": "TCP", 00:18:06.240 "max_queue_depth": 128, 00:18:06.240 "max_io_qpairs_per_ctrlr": 127, 00:18:06.240 "in_capsule_data_size": 4096, 00:18:06.240 "max_io_size": 131072, 00:18:06.240 "io_unit_size": 131072, 00:18:06.240 "max_aq_depth": 128, 00:18:06.240 "num_shared_buffers": 511, 00:18:06.241 "buf_cache_size": 4294967295, 00:18:06.241 "dif_insert_or_strip": false, 00:18:06.241 "zcopy": false, 00:18:06.241 "c2h_success": false, 00:18:06.241 "sock_priority": 0, 00:18:06.241 "abort_timeout_sec": 1, 00:18:06.241 "ack_timeout": 0, 00:18:06.241 "data_wr_pool_size": 0 00:18:06.241 } 00:18:06.241 }, 00:18:06.241 { 00:18:06.241 "method": "nvmf_create_subsystem", 00:18:06.241 "params": { 00:18:06.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.241 
"allow_any_host": false, 00:18:06.241 "serial_number": "SPDK00000000000001", 00:18:06.241 "model_number": "SPDK bdev Controller", 00:18:06.241 "max_namespaces": 10, 00:18:06.241 "min_cntlid": 1, 00:18:06.241 "max_cntlid": 65519, 00:18:06.241 "ana_reporting": false 00:18:06.241 } 00:18:06.241 }, 00:18:06.241 { 00:18:06.241 "method": "nvmf_subsystem_add_host", 00:18:06.241 "params": { 00:18:06.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.241 "host": "nqn.2016-06.io.spdk:host1", 00:18:06.241 "psk": "key0" 00:18:06.241 } 00:18:06.241 }, 00:18:06.241 { 00:18:06.241 "method": "nvmf_subsystem_add_ns", 00:18:06.241 "params": { 00:18:06.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.241 "namespace": { 00:18:06.241 "nsid": 1, 00:18:06.241 "bdev_name": "malloc0", 00:18:06.241 "nguid": "6D68F7426C4349CE9A76F830CB527DDA", 00:18:06.241 "uuid": "6d68f742-6c43-49ce-9a76-f830cb527dda", 00:18:06.241 "no_auto_visible": false 00:18:06.241 } 00:18:06.241 } 00:18:06.241 }, 00:18:06.241 { 00:18:06.241 "method": "nvmf_subsystem_add_listener", 00:18:06.241 "params": { 00:18:06.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.241 "listen_address": { 00:18:06.241 "trtype": "TCP", 00:18:06.241 "adrfam": "IPv4", 00:18:06.241 "traddr": "10.0.0.2", 00:18:06.241 "trsvcid": "4420" 00:18:06.241 }, 00:18:06.241 "secure_channel": true 00:18:06.241 } 00:18:06.241 } 00:18:06.241 ] 00:18:06.241 } 00:18:06.241 ] 00:18:06.241 }' 00:18:06.241 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:06.498 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:06.498 "subsystems": [ 00:18:06.498 { 00:18:06.498 "subsystem": "keyring", 00:18:06.498 "config": [ 00:18:06.498 { 00:18:06.498 "method": "keyring_file_add_key", 00:18:06.498 "params": { 00:18:06.498 "name": "key0", 00:18:06.498 "path": "/tmp/tmp.lenRvSW2hN" 00:18:06.498 } 
00:18:06.498 } 00:18:06.498 ] 00:18:06.498 }, 00:18:06.498 { 00:18:06.498 "subsystem": "iobuf", 00:18:06.498 "config": [ 00:18:06.498 { 00:18:06.498 "method": "iobuf_set_options", 00:18:06.498 "params": { 00:18:06.498 "small_pool_count": 8192, 00:18:06.498 "large_pool_count": 1024, 00:18:06.498 "small_bufsize": 8192, 00:18:06.498 "large_bufsize": 135168, 00:18:06.498 "enable_numa": false 00:18:06.498 } 00:18:06.498 } 00:18:06.498 ] 00:18:06.498 }, 00:18:06.498 { 00:18:06.498 "subsystem": "sock", 00:18:06.498 "config": [ 00:18:06.498 { 00:18:06.498 "method": "sock_set_default_impl", 00:18:06.498 "params": { 00:18:06.498 "impl_name": "posix" 00:18:06.498 } 00:18:06.498 }, 00:18:06.498 { 00:18:06.498 "method": "sock_impl_set_options", 00:18:06.498 "params": { 00:18:06.498 "impl_name": "ssl", 00:18:06.498 "recv_buf_size": 4096, 00:18:06.498 "send_buf_size": 4096, 00:18:06.498 "enable_recv_pipe": true, 00:18:06.498 "enable_quickack": false, 00:18:06.499 "enable_placement_id": 0, 00:18:06.499 "enable_zerocopy_send_server": true, 00:18:06.499 "enable_zerocopy_send_client": false, 00:18:06.499 "zerocopy_threshold": 0, 00:18:06.499 "tls_version": 0, 00:18:06.499 "enable_ktls": false 00:18:06.499 } 00:18:06.499 }, 00:18:06.499 { 00:18:06.499 "method": "sock_impl_set_options", 00:18:06.499 "params": { 00:18:06.499 "impl_name": "posix", 00:18:06.499 "recv_buf_size": 2097152, 00:18:06.499 "send_buf_size": 2097152, 00:18:06.499 "enable_recv_pipe": true, 00:18:06.499 "enable_quickack": false, 00:18:06.499 "enable_placement_id": 0, 00:18:06.499 "enable_zerocopy_send_server": true, 00:18:06.499 "enable_zerocopy_send_client": false, 00:18:06.499 "zerocopy_threshold": 0, 00:18:06.499 "tls_version": 0, 00:18:06.499 "enable_ktls": false 00:18:06.499 } 00:18:06.499 } 00:18:06.499 ] 00:18:06.499 }, 00:18:06.499 { 00:18:06.499 "subsystem": "vmd", 00:18:06.499 "config": [] 00:18:06.499 }, 00:18:06.499 { 00:18:06.499 "subsystem": "accel", 00:18:06.499 "config": [ 00:18:06.499 { 00:18:06.499 
"method": "accel_set_options", 00:18:06.499 "params": { 00:18:06.499 "small_cache_size": 128, 00:18:06.499 "large_cache_size": 16, 00:18:06.499 "task_count": 2048, 00:18:06.499 "sequence_count": 2048, 00:18:06.499 "buf_count": 2048 00:18:06.499 } 00:18:06.499 } 00:18:06.499 ] 00:18:06.499 }, 00:18:06.499 { 00:18:06.499 "subsystem": "bdev", 00:18:06.499 "config": [ 00:18:06.499 { 00:18:06.499 "method": "bdev_set_options", 00:18:06.499 "params": { 00:18:06.499 "bdev_io_pool_size": 65535, 00:18:06.499 "bdev_io_cache_size": 256, 00:18:06.499 "bdev_auto_examine": true, 00:18:06.499 "iobuf_small_cache_size": 128, 00:18:06.499 "iobuf_large_cache_size": 16 00:18:06.499 } 00:18:06.499 }, 00:18:06.499 { 00:18:06.499 "method": "bdev_raid_set_options", 00:18:06.499 "params": { 00:18:06.499 "process_window_size_kb": 1024, 00:18:06.499 "process_max_bandwidth_mb_sec": 0 00:18:06.499 } 00:18:06.499 }, 00:18:06.499 { 00:18:06.499 "method": "bdev_iscsi_set_options", 00:18:06.499 "params": { 00:18:06.499 "timeout_sec": 30 00:18:06.499 } 00:18:06.499 }, 00:18:06.499 { 00:18:06.499 "method": "bdev_nvme_set_options", 00:18:06.499 "params": { 00:18:06.499 "action_on_timeout": "none", 00:18:06.499 "timeout_us": 0, 00:18:06.499 "timeout_admin_us": 0, 00:18:06.499 "keep_alive_timeout_ms": 10000, 00:18:06.499 "arbitration_burst": 0, 00:18:06.499 "low_priority_weight": 0, 00:18:06.499 "medium_priority_weight": 0, 00:18:06.499 "high_priority_weight": 0, 00:18:06.499 "nvme_adminq_poll_period_us": 10000, 00:18:06.499 "nvme_ioq_poll_period_us": 0, 00:18:06.499 "io_queue_requests": 512, 00:18:06.499 "delay_cmd_submit": true, 00:18:06.499 "transport_retry_count": 4, 00:18:06.499 "bdev_retry_count": 3, 00:18:06.499 "transport_ack_timeout": 0, 00:18:06.499 "ctrlr_loss_timeout_sec": 0, 00:18:06.499 "reconnect_delay_sec": 0, 00:18:06.499 "fast_io_fail_timeout_sec": 0, 00:18:06.499 "disable_auto_failback": false, 00:18:06.499 "generate_uuids": false, 00:18:06.499 "transport_tos": 0, 00:18:06.499 
"nvme_error_stat": false, 00:18:06.499 "rdma_srq_size": 0, 00:18:06.499 "io_path_stat": false, 00:18:06.499 "allow_accel_sequence": false, 00:18:06.499 "rdma_max_cq_size": 0, 00:18:06.499 "rdma_cm_event_timeout_ms": 0, 00:18:06.499 "dhchap_digests": [ 00:18:06.499 "sha256", 00:18:06.499 "sha384", 00:18:06.499 "sha512" 00:18:06.499 ], 00:18:06.499 "dhchap_dhgroups": [ 00:18:06.499 "null", 00:18:06.499 "ffdhe2048", 00:18:06.499 "ffdhe3072", 00:18:06.499 "ffdhe4096", 00:18:06.499 "ffdhe6144", 00:18:06.499 "ffdhe8192" 00:18:06.499 ] 00:18:06.499 } 00:18:06.499 }, 00:18:06.499 { 00:18:06.499 "method": "bdev_nvme_attach_controller", 00:18:06.499 "params": { 00:18:06.499 "name": "TLSTEST", 00:18:06.499 "trtype": "TCP", 00:18:06.499 "adrfam": "IPv4", 00:18:06.499 "traddr": "10.0.0.2", 00:18:06.499 "trsvcid": "4420", 00:18:06.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.499 "prchk_reftag": false, 00:18:06.499 "prchk_guard": false, 00:18:06.499 "ctrlr_loss_timeout_sec": 0, 00:18:06.499 "reconnect_delay_sec": 0, 00:18:06.499 "fast_io_fail_timeout_sec": 0, 00:18:06.499 "psk": "key0", 00:18:06.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:06.499 "hdgst": false, 00:18:06.499 "ddgst": false, 00:18:06.499 "multipath": "multipath" 00:18:06.499 } 00:18:06.499 }, 00:18:06.499 { 00:18:06.499 "method": "bdev_nvme_set_hotplug", 00:18:06.499 "params": { 00:18:06.499 "period_us": 100000, 00:18:06.499 "enable": false 00:18:06.499 } 00:18:06.499 }, 00:18:06.499 { 00:18:06.499 "method": "bdev_wait_for_examine" 00:18:06.499 } 00:18:06.499 ] 00:18:06.499 }, 00:18:06.499 { 00:18:06.499 "subsystem": "nbd", 00:18:06.499 "config": [] 00:18:06.499 } 00:18:06.499 ] 00:18:06.499 }' 00:18:06.499 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2628245 00:18:06.499 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2628245 ']' 00:18:06.499 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2628245 00:18:06.757 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:06.757 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.757 11:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2628245 00:18:06.757 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:06.757 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:06.757 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2628245' 00:18:06.757 killing process with pid 2628245 00:18:06.757 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2628245 00:18:06.757 Received shutdown signal, test time was about 10.000000 seconds 00:18:06.757 00:18:06.757 Latency(us) 00:18:06.757 [2024-11-19T10:20:02.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.757 [2024-11-19T10:20:02.254Z] =================================================================================================================== 00:18:06.757 [2024-11-19T10:20:02.254Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:06.757 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2628245 00:18:06.757 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2627959 00:18:06.757 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2627959 ']' 00:18:06.757 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2627959 00:18:06.757 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:06.757 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.015 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2627959 00:18:07.015 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:07.015 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:07.015 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2627959' 00:18:07.015 killing process with pid 2627959 00:18:07.015 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2627959 00:18:07.015 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2627959 00:18:07.275 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:07.275 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:07.275 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.275 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:07.275 "subsystems": [ 00:18:07.275 { 00:18:07.275 "subsystem": "keyring", 00:18:07.275 "config": [ 00:18:07.275 { 00:18:07.275 "method": "keyring_file_add_key", 00:18:07.275 "params": { 00:18:07.275 "name": "key0", 00:18:07.275 "path": "/tmp/tmp.lenRvSW2hN" 00:18:07.275 } 00:18:07.275 } 00:18:07.275 ] 00:18:07.275 }, 00:18:07.275 { 00:18:07.275 "subsystem": "iobuf", 00:18:07.275 "config": [ 00:18:07.275 { 00:18:07.275 "method": "iobuf_set_options", 00:18:07.275 "params": { 00:18:07.275 "small_pool_count": 8192, 00:18:07.275 "large_pool_count": 1024, 00:18:07.275 "small_bufsize": 8192, 00:18:07.275 "large_bufsize": 135168, 00:18:07.275 "enable_numa": false 00:18:07.275 } 00:18:07.275 } 00:18:07.275 ] 00:18:07.275 }, 
00:18:07.275 { 00:18:07.275 "subsystem": "sock", 00:18:07.275 "config": [ 00:18:07.275 { 00:18:07.275 "method": "sock_set_default_impl", 00:18:07.275 "params": { 00:18:07.275 "impl_name": "posix" 00:18:07.275 } 00:18:07.275 }, 00:18:07.275 { 00:18:07.275 "method": "sock_impl_set_options", 00:18:07.275 "params": { 00:18:07.275 "impl_name": "ssl", 00:18:07.275 "recv_buf_size": 4096, 00:18:07.275 "send_buf_size": 4096, 00:18:07.275 "enable_recv_pipe": true, 00:18:07.275 "enable_quickack": false, 00:18:07.275 "enable_placement_id": 0, 00:18:07.275 "enable_zerocopy_send_server": true, 00:18:07.275 "enable_zerocopy_send_client": false, 00:18:07.275 "zerocopy_threshold": 0, 00:18:07.275 "tls_version": 0, 00:18:07.275 "enable_ktls": false 00:18:07.275 } 00:18:07.275 }, 00:18:07.275 { 00:18:07.275 "method": "sock_impl_set_options", 00:18:07.275 "params": { 00:18:07.275 "impl_name": "posix", 00:18:07.275 "recv_buf_size": 2097152, 00:18:07.275 "send_buf_size": 2097152, 00:18:07.275 "enable_recv_pipe": true, 00:18:07.275 "enable_quickack": false, 00:18:07.275 "enable_placement_id": 0, 00:18:07.275 "enable_zerocopy_send_server": true, 00:18:07.275 "enable_zerocopy_send_client": false, 00:18:07.275 "zerocopy_threshold": 0, 00:18:07.275 "tls_version": 0, 00:18:07.275 "enable_ktls": false 00:18:07.275 } 00:18:07.275 } 00:18:07.275 ] 00:18:07.275 }, 00:18:07.275 { 00:18:07.275 "subsystem": "vmd", 00:18:07.275 "config": [] 00:18:07.275 }, 00:18:07.275 { 00:18:07.275 "subsystem": "accel", 00:18:07.275 "config": [ 00:18:07.275 { 00:18:07.275 "method": "accel_set_options", 00:18:07.275 "params": { 00:18:07.275 "small_cache_size": 128, 00:18:07.275 "large_cache_size": 16, 00:18:07.275 "task_count": 2048, 00:18:07.275 "sequence_count": 2048, 00:18:07.275 "buf_count": 2048 00:18:07.275 } 00:18:07.275 } 00:18:07.275 ] 00:18:07.275 }, 00:18:07.275 { 00:18:07.275 "subsystem": "bdev", 00:18:07.275 "config": [ 00:18:07.275 { 00:18:07.275 "method": "bdev_set_options", 00:18:07.275 "params": { 
00:18:07.275 "bdev_io_pool_size": 65535, 00:18:07.275 "bdev_io_cache_size": 256, 00:18:07.275 "bdev_auto_examine": true, 00:18:07.275 "iobuf_small_cache_size": 128, 00:18:07.275 "iobuf_large_cache_size": 16 00:18:07.275 } 00:18:07.275 }, 00:18:07.275 { 00:18:07.275 "method": "bdev_raid_set_options", 00:18:07.275 "params": { 00:18:07.275 "process_window_size_kb": 1024, 00:18:07.275 "process_max_bandwidth_mb_sec": 0 00:18:07.275 } 00:18:07.275 }, 00:18:07.275 { 00:18:07.275 "method": "bdev_iscsi_set_options", 00:18:07.275 "params": { 00:18:07.275 "timeout_sec": 30 00:18:07.275 } 00:18:07.275 }, 00:18:07.275 { 00:18:07.275 "method": "bdev_nvme_set_options", 00:18:07.275 "params": { 00:18:07.275 "action_on_timeout": "none", 00:18:07.275 "timeout_us": 0, 00:18:07.275 "timeout_admin_us": 0, 00:18:07.275 "keep_alive_timeout_ms": 10000, 00:18:07.275 "arbitration_burst": 0, 00:18:07.275 "low_priority_weight": 0, 00:18:07.275 "medium_priority_weight": 0, 00:18:07.275 "high_priority_weight": 0, 00:18:07.275 "nvme_adminq_poll_period_us": 10000, 00:18:07.275 "nvme_ioq_poll_period_us": 0, 00:18:07.275 "io_queue_requests": 0, 00:18:07.275 "delay_cmd_submit": true, 00:18:07.275 "transport_retry_count": 4, 00:18:07.275 "bdev_retry_count": 3, 00:18:07.275 "transport_ack_timeout": 0, 00:18:07.275 "ctrlr_loss_timeout_sec": 0, 00:18:07.275 "reconnect_delay_sec": 0, 00:18:07.275 "fast_io_fail_timeout_sec": 0, 00:18:07.275 "disable_auto_failback": false, 00:18:07.275 "generate_uuids": false, 00:18:07.275 "transport_tos": 0, 00:18:07.275 "nvme_error_stat": false, 00:18:07.276 "rdma_srq_size": 0, 00:18:07.276 "io_path_stat": false, 00:18:07.276 "allow_accel_sequence": false, 00:18:07.276 "rdma_max_cq_size": 0, 00:18:07.276 "rdma_cm_event_timeout_ms": 0, 00:18:07.276 "dhchap_digests": [ 00:18:07.276 "sha256", 00:18:07.276 "sha384", 00:18:07.276 "sha512" 00:18:07.276 ], 00:18:07.276 "dhchap_dhgroups": [ 00:18:07.276 "null", 00:18:07.276 "ffdhe2048", 00:18:07.276 "ffdhe3072", 00:18:07.276 
"ffdhe4096", 00:18:07.276 "ffdhe6144", 00:18:07.276 "ffdhe8192" 00:18:07.276 ] 00:18:07.276 } 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "method": "bdev_nvme_set_hotplug", 00:18:07.276 "params": { 00:18:07.276 "period_us": 100000, 00:18:07.276 "enable": false 00:18:07.276 } 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "method": "bdev_malloc_create", 00:18:07.276 "params": { 00:18:07.276 "name": "malloc0", 00:18:07.276 "num_blocks": 8192, 00:18:07.276 "block_size": 4096, 00:18:07.276 "physical_block_size": 4096, 00:18:07.276 "uuid": "6d68f742-6c43-49ce-9a76-f830cb527dda", 00:18:07.276 "optimal_io_boundary": 0, 00:18:07.276 "md_size": 0, 00:18:07.276 "dif_type": 0, 00:18:07.276 "dif_is_head_of_md": false, 00:18:07.276 "dif_pi_format": 0 00:18:07.276 } 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "method": "bdev_wait_for_examine" 00:18:07.276 } 00:18:07.276 ] 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "subsystem": "nbd", 00:18:07.276 "config": [] 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "subsystem": "scheduler", 00:18:07.276 "config": [ 00:18:07.276 { 00:18:07.276 "method": "framework_set_scheduler", 00:18:07.276 "params": { 00:18:07.276 "name": "static" 00:18:07.276 } 00:18:07.276 } 00:18:07.276 ] 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "subsystem": "nvmf", 00:18:07.276 "config": [ 00:18:07.276 { 00:18:07.276 "method": "nvmf_set_config", 00:18:07.276 "params": { 00:18:07.276 "discovery_filter": "match_any", 00:18:07.276 "admin_cmd_passthru": { 00:18:07.276 "identify_ctrlr": false 00:18:07.276 }, 00:18:07.276 "dhchap_digests": [ 00:18:07.276 "sha256", 00:18:07.276 "sha384", 00:18:07.276 "sha512" 00:18:07.276 ], 00:18:07.276 "dhchap_dhgroups": [ 00:18:07.276 "null", 00:18:07.276 "ffdhe2048", 00:18:07.276 "ffdhe3072", 00:18:07.276 "ffdhe4096", 00:18:07.276 "ffdhe6144", 00:18:07.276 "ffdhe8192" 00:18:07.276 ] 00:18:07.276 } 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "method": "nvmf_set_max_subsystems", 00:18:07.276 "params": { 00:18:07.276 "max_subsystems": 1024 
00:18:07.276 } 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "method": "nvmf_set_crdt", 00:18:07.276 "params": { 00:18:07.276 "crdt1": 0, 00:18:07.276 "crdt2": 0, 00:18:07.276 "crdt3": 0 00:18:07.276 } 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "method": "nvmf_create_transport", 00:18:07.276 "params": { 00:18:07.276 "trtype": "TCP", 00:18:07.276 "max_queue_depth": 128, 00:18:07.276 "max_io_qpairs_per_ctrlr": 127, 00:18:07.276 "in_capsule_data_size": 4096, 00:18:07.276 "max_io_size": 131072, 00:18:07.276 "io_unit_size": 131072, 00:18:07.276 "max_aq_depth": 128, 00:18:07.276 "num_shared_buffers": 511, 00:18:07.276 "buf_cache_size": 4294967295, 00:18:07.276 "dif_insert_or_strip": false, 00:18:07.276 "zcopy": false, 00:18:07.276 "c2h_success": false, 00:18:07.276 "sock_priority": 0, 00:18:07.276 "abort_timeout_sec": 1, 00:18:07.276 "ack_timeout": 0, 00:18:07.276 "data_wr_pool_size": 0 00:18:07.276 } 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "method": "nvmf_create_subsystem", 00:18:07.276 "params": { 00:18:07.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.276 "allow_any_host": false, 00:18:07.276 "serial_number": "SPDK00000000000001", 00:18:07.276 "model_number": "SPDK bdev Controller", 00:18:07.276 "max_namespaces": 10, 00:18:07.276 "min_cntlid": 1, 00:18:07.276 "max_cntlid": 65519, 00:18:07.276 "ana_reporting": false 00:18:07.276 } 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "method": "nvmf_subsystem_add_host", 00:18:07.276 "params": { 00:18:07.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.276 "host": "nqn.2016-06.io.spdk:host1", 00:18:07.276 "psk": "key0" 00:18:07.276 } 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "method": "nvmf_subsystem_add_ns", 00:18:07.276 "params": { 00:18:07.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.276 "namespace": { 00:18:07.276 "nsid": 1, 00:18:07.276 "bdev_name": "malloc0", 00:18:07.276 "nguid": "6D68F7426C4349CE9A76F830CB527DDA", 00:18:07.276 "uuid": "6d68f742-6c43-49ce-9a76-f830cb527dda", 00:18:07.276 "no_auto_visible": 
false 00:18:07.276 } 00:18:07.276 } 00:18:07.276 }, 00:18:07.276 { 00:18:07.276 "method": "nvmf_subsystem_add_listener", 00:18:07.276 "params": { 00:18:07.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.276 "listen_address": { 00:18:07.276 "trtype": "TCP", 00:18:07.276 "adrfam": "IPv4", 00:18:07.276 "traddr": "10.0.0.2", 00:18:07.276 "trsvcid": "4420" 00:18:07.276 }, 00:18:07.276 "secure_channel": true 00:18:07.276 } 00:18:07.276 } 00:18:07.276 ] 00:18:07.276 } 00:18:07.276 ] 00:18:07.276 }' 00:18:07.276 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.276 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2628524 00:18:07.276 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:07.276 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2628524 00:18:07.276 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2628524 ']' 00:18:07.276 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.276 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.276 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:07.276 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.276 11:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.276 [2024-11-19 11:20:02.570947] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:18:07.276 [2024-11-19 11:20:02.571038] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.276 [2024-11-19 11:20:02.652967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.276 [2024-11-19 11:20:02.706816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.276 [2024-11-19 11:20:02.706879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.276 [2024-11-19 11:20:02.706907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.277 [2024-11-19 11:20:02.706919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.277 [2024-11-19 11:20:02.706928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
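The target configuration above registers a file-based TLS pre-shared key (`/tmp/tmp.lenRvSW2hN`, added later via `keyring_file_add_key` and referenced as `"psk": "key0"` in `nvmf_subsystem_add_host`). The log does not show the key file's contents; as a hedged sketch only, assuming such a file holds a key in the NVMe/TCP PSK interchange format (`NVMeTLSkey-1:<hash>:<base64(key || crc32)>:`, with hash id `01` for SHA-256 and the CRC32 of the key bytes appended little-endian — these layout details are assumptions, not something this run confirms), the encoding could look like:

```python
import base64
import struct
import zlib

def psk_interchange(raw_key: bytes, hash_id: str = "01") -> str:
    """Encode a raw PSK in an assumed NVMe/TCP PSK interchange layout.

    Assumed (hedged) layout: 'NVMeTLSkey-1:<hash>:<base64(key || crc32_le)>:'
    where the CRC32 of the key bytes is appended little-endian before
    base64 encoding. Verify against the spec before relying on this.
    """
    payload = raw_key + struct.pack("<I", zlib.crc32(raw_key))
    return f"NVMeTLSkey-1:{hash_id}:{base64.b64encode(payload).decode()}:"

# Hypothetical 32-byte key for illustration only -- NOT the key from this run.
key = bytes(range(32))
print(psk_interchange(key))
```

Both initiator (`bdevperf`) and target must load the same interchange string for the TLS handshake on port 4420 to succeed, which is why the same `/tmp/tmp.lenRvSW2hN` path is registered on both sides below.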
00:18:07.277 [2024-11-19 11:20:02.707572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.535 [2024-11-19 11:20:02.936599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.535 [2024-11-19 11:20:02.968617] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:07.535 [2024-11-19 11:20:02.968869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2628676 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2628676 /var/tmp/bdevperf.sock 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2628676 ']' 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:08.102 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:08.102 "subsystems": [ 00:18:08.102 { 00:18:08.102 "subsystem": "keyring", 00:18:08.102 "config": [ 00:18:08.102 { 00:18:08.102 "method": "keyring_file_add_key", 00:18:08.102 "params": { 00:18:08.102 "name": "key0", 00:18:08.102 "path": "/tmp/tmp.lenRvSW2hN" 00:18:08.102 } 00:18:08.102 } 00:18:08.102 ] 00:18:08.102 }, 00:18:08.102 { 00:18:08.102 "subsystem": "iobuf", 00:18:08.102 "config": [ 00:18:08.102 { 00:18:08.102 "method": "iobuf_set_options", 00:18:08.102 "params": { 00:18:08.103 "small_pool_count": 8192, 00:18:08.103 "large_pool_count": 1024, 00:18:08.103 "small_bufsize": 8192, 00:18:08.103 "large_bufsize": 135168, 00:18:08.103 "enable_numa": false 00:18:08.103 } 00:18:08.103 } 00:18:08.103 ] 00:18:08.103 }, 00:18:08.103 { 00:18:08.103 "subsystem": "sock", 00:18:08.103 "config": [ 00:18:08.103 { 00:18:08.103 "method": "sock_set_default_impl", 00:18:08.103 "params": { 00:18:08.103 "impl_name": "posix" 00:18:08.103 } 00:18:08.103 }, 00:18:08.103 { 00:18:08.103 "method": "sock_impl_set_options", 00:18:08.103 "params": { 00:18:08.103 "impl_name": "ssl", 00:18:08.103 "recv_buf_size": 4096, 00:18:08.103 "send_buf_size": 4096, 00:18:08.103 "enable_recv_pipe": true, 00:18:08.103 "enable_quickack": false, 00:18:08.103 "enable_placement_id": 0, 00:18:08.103 "enable_zerocopy_send_server": true, 00:18:08.103 "enable_zerocopy_send_client": false, 00:18:08.103 "zerocopy_threshold": 0, 00:18:08.103 "tls_version": 0, 00:18:08.103 "enable_ktls": false 00:18:08.103 } 00:18:08.103 }, 00:18:08.103 { 00:18:08.103 "method": "sock_impl_set_options", 00:18:08.103 "params": { 00:18:08.103 "impl_name": "posix", 00:18:08.103 "recv_buf_size": 2097152, 00:18:08.103 "send_buf_size": 2097152, 00:18:08.103 "enable_recv_pipe": true, 00:18:08.103 "enable_quickack": false, 00:18:08.103 "enable_placement_id": 0, 00:18:08.103 "enable_zerocopy_send_server": true, 00:18:08.103 
"enable_zerocopy_send_client": false, 00:18:08.103 "zerocopy_threshold": 0, 00:18:08.103 "tls_version": 0, 00:18:08.103 "enable_ktls": false 00:18:08.103 } 00:18:08.103 } 00:18:08.103 ] 00:18:08.103 }, 00:18:08.103 { 00:18:08.103 "subsystem": "vmd", 00:18:08.103 "config": [] 00:18:08.103 }, 00:18:08.103 { 00:18:08.103 "subsystem": "accel", 00:18:08.103 "config": [ 00:18:08.103 { 00:18:08.103 "method": "accel_set_options", 00:18:08.103 "params": { 00:18:08.103 "small_cache_size": 128, 00:18:08.103 "large_cache_size": 16, 00:18:08.103 "task_count": 2048, 00:18:08.103 "sequence_count": 2048, 00:18:08.103 "buf_count": 2048 00:18:08.103 } 00:18:08.103 } 00:18:08.103 ] 00:18:08.103 }, 00:18:08.103 { 00:18:08.103 "subsystem": "bdev", 00:18:08.103 "config": [ 00:18:08.103 { 00:18:08.103 "method": "bdev_set_options", 00:18:08.103 "params": { 00:18:08.103 "bdev_io_pool_size": 65535, 00:18:08.103 "bdev_io_cache_size": 256, 00:18:08.103 "bdev_auto_examine": true, 00:18:08.103 "iobuf_small_cache_size": 128, 00:18:08.103 "iobuf_large_cache_size": 16 00:18:08.103 } 00:18:08.103 }, 00:18:08.103 { 00:18:08.103 "method": "bdev_raid_set_options", 00:18:08.103 "params": { 00:18:08.103 "process_window_size_kb": 1024, 00:18:08.103 "process_max_bandwidth_mb_sec": 0 00:18:08.103 } 00:18:08.103 }, 00:18:08.103 { 00:18:08.103 "method": "bdev_iscsi_set_options", 00:18:08.103 "params": { 00:18:08.103 "timeout_sec": 30 00:18:08.103 } 00:18:08.103 }, 00:18:08.103 { 00:18:08.103 "method": "bdev_nvme_set_options", 00:18:08.103 "params": { 00:18:08.103 "action_on_timeout": "none", 00:18:08.103 "timeout_us": 0, 00:18:08.103 "timeout_admin_us": 0, 00:18:08.103 "keep_alive_timeout_ms": 10000, 00:18:08.103 "arbitration_burst": 0, 00:18:08.103 "low_priority_weight": 0, 00:18:08.103 "medium_priority_weight": 0, 00:18:08.103 "high_priority_weight": 0, 00:18:08.103 "nvme_adminq_poll_period_us": 10000, 00:18:08.103 "nvme_ioq_poll_period_us": 0, 00:18:08.103 "io_queue_requests": 512, 00:18:08.103 
"delay_cmd_submit": true, 00:18:08.103 "transport_retry_count": 4, 00:18:08.103 "bdev_retry_count": 3, 00:18:08.103 "transport_ack_timeout": 0, 00:18:08.103 "ctrlr_loss_timeout_sec": 0, 00:18:08.103 "reconnect_delay_sec": 0, 00:18:08.103 "fast_io_fail_timeout_sec": 0, 00:18:08.103 "disable_auto_failback": false, 00:18:08.103 "generate_uuids": false, 00:18:08.103 "transport_tos": 0, 00:18:08.103 "nvme_error_stat": false, 00:18:08.103 "rdma_srq_size": 0, 00:18:08.103 "io_path_stat": false, 00:18:08.103 "allow_accel_sequence": false, 00:18:08.103 "rdma_max_cq_size": 0, 00:18:08.103 "rdma_cm_event_timeout_ms": 0, 00:18:08.103 "dhchap_digests": [ 00:18:08.103 "sha256", 00:18:08.103 "sha384", 00:18:08.103 "sha512" 00:18:08.103 ], 00:18:08.103 "dhchap_dhgroups": [ 00:18:08.103 "null", 00:18:08.103 "ffdhe2048", 00:18:08.103 "ffdhe3072", 00:18:08.103 "ffdhe4096", 00:18:08.103 "ffdhe6144", 00:18:08.103 "ffdhe8192" 00:18:08.103 ] 00:18:08.103 } 00:18:08.103 }, 00:18:08.103 { 00:18:08.103 "method": "bdev_nvme_attach_controller", 00:18:08.103 "params": { 00:18:08.103 "name": "TLSTEST", 00:18:08.103 "trtype": "TCP", 00:18:08.103 "adrfam": "IPv4", 00:18:08.103 "traddr": "10.0.0.2", 00:18:08.103 "trsvcid": "4420", 00:18:08.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.103 "prchk_reftag": false, 00:18:08.103 "prchk_guard": false, 00:18:08.103 "ctrlr_loss_timeout_sec": 0, 00:18:08.103 "reconnect_delay_sec": 0, 00:18:08.103 "fast_io_fail_timeout_sec": 0, 00:18:08.103 "psk": "key0", 00:18:08.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.103 "hdgst": false, 00:18:08.103 "ddgst": false, 00:18:08.103 "multipath": "multipath" 00:18:08.103 } 00:18:08.103 }, 00:18:08.103 { 00:18:08.103 "method": "bdev_nvme_set_hotplug", 00:18:08.103 "params": { 00:18:08.103 "period_us": 100000, 00:18:08.103 "enable": false 00:18:08.103 } 00:18:08.103 }, 00:18:08.103 { 00:18:08.103 "method": "bdev_wait_for_examine" 00:18:08.103 } 00:18:08.103 ] 00:18:08.104 }, 00:18:08.104 { 00:18:08.104 
"subsystem": "nbd", 00:18:08.104 "config": [] 00:18:08.104 } 00:18:08.104 ] 00:18:08.104 }' 00:18:08.104 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.104 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.104 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.363 [2024-11-19 11:20:03.636937] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:18:08.363 [2024-11-19 11:20:03.637014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628676 ] 00:18:08.363 [2024-11-19 11:20:03.712651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.363 [2024-11-19 11:20:03.769078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.621 [2024-11-19 11:20:03.950489] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.187 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.187 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.187 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:09.446 Running I/O for 10 seconds... 
00:18:11.314 3434.00 IOPS, 13.41 MiB/s [2024-11-19T10:20:07.744Z] 3603.00 IOPS, 14.07 MiB/s [2024-11-19T10:20:09.119Z] 3548.33 IOPS, 13.86 MiB/s [2024-11-19T10:20:10.055Z] 3542.75 IOPS, 13.84 MiB/s [2024-11-19T10:20:10.989Z] 3574.00 IOPS, 13.96 MiB/s [2024-11-19T10:20:11.923Z] 3567.33 IOPS, 13.93 MiB/s [2024-11-19T10:20:12.858Z] 3549.57 IOPS, 13.87 MiB/s [2024-11-19T10:20:13.792Z] 3547.00 IOPS, 13.86 MiB/s [2024-11-19T10:20:15.166Z] 3548.78 IOPS, 13.86 MiB/s [2024-11-19T10:20:15.166Z] 3548.70 IOPS, 13.86 MiB/s 00:18:19.669 Latency(us) 00:18:19.669 [2024-11-19T10:20:15.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.669 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:19.669 Verification LBA range: start 0x0 length 0x2000 00:18:19.669 TLSTESTn1 : 10.02 3554.79 13.89 0.00 0.00 35952.49 5873.97 34758.35 00:18:19.669 [2024-11-19T10:20:15.166Z] =================================================================================================================== 00:18:19.669 [2024-11-19T10:20:15.166Z] Total : 3554.79 13.89 0.00 0.00 35952.49 5873.97 34758.35 00:18:19.669 { 00:18:19.669 "results": [ 00:18:19.669 { 00:18:19.669 "job": "TLSTESTn1", 00:18:19.669 "core_mask": "0x4", 00:18:19.669 "workload": "verify", 00:18:19.669 "status": "finished", 00:18:19.669 "verify_range": { 00:18:19.669 "start": 0, 00:18:19.669 "length": 8192 00:18:19.669 }, 00:18:19.669 "queue_depth": 128, 00:18:19.669 "io_size": 4096, 00:18:19.669 "runtime": 10.018027, 00:18:19.669 "iops": 3554.7917768638476, 00:18:19.669 "mibps": 13.885905378374405, 00:18:19.669 "io_failed": 0, 00:18:19.669 "io_timeout": 0, 00:18:19.669 "avg_latency_us": 35952.489241995, 00:18:19.669 "min_latency_us": 5873.967407407407, 00:18:19.669 "max_latency_us": 34758.35259259259 00:18:19.669 } 00:18:19.669 ], 00:18:19.669 "core_count": 1 00:18:19.669 } 00:18:19.669 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:18:19.669 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2628676 00:18:19.669 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2628676 ']' 00:18:19.669 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2628676 00:18:19.669 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:19.669 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.669 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2628676 00:18:19.669 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:19.669 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:19.669 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2628676' 00:18:19.669 killing process with pid 2628676 00:18:19.669 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2628676 00:18:19.669 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.669 00:18:19.669 Latency(us) 00:18:19.669 [2024-11-19T10:20:15.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.669 [2024-11-19T10:20:15.166Z] =================================================================================================================== 00:18:19.669 [2024-11-19T10:20:15.166Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:19.669 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2628676 00:18:19.669 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2628524 00:18:19.669 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2628524 ']' 00:18:19.669 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2628524 00:18:19.669 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:19.669 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.669 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2628524 00:18:19.669 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:19.669 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:19.669 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2628524' 00:18:19.669 killing process with pid 2628524 00:18:19.669 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2628524 00:18:19.669 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2628524 00:18:19.927 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:19.927 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.927 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.927 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.927 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2630009 00:18:19.927 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:19.927 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2630009 00:18:19.927 
11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2630009 ']' 00:18:19.927 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.927 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.927 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.927 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.927 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.927 [2024-11-19 11:20:15.346433] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:18:19.927 [2024-11-19 11:20:15.346511] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.927 [2024-11-19 11:20:15.423500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.186 [2024-11-19 11:20:15.477495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.186 [2024-11-19 11:20:15.477556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.186 [2024-11-19 11:20:15.477584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.186 [2024-11-19 11:20:15.477595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:20.186 [2024-11-19 11:20:15.477604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.186 [2024-11-19 11:20:15.478183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.186 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.186 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:20.186 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:20.186 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:20.186 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.186 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.186 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.lenRvSW2hN 00:18:20.186 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lenRvSW2hN 00:18:20.186 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:20.753 [2024-11-19 11:20:15.948746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.753 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:21.017 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:21.017 [2024-11-19 11:20:16.494232] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:21.017 [2024-11-19 11:20:16.494529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.360 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:21.360 malloc0 00:18:21.360 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.618 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lenRvSW2hN 00:18:21.876 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:22.133 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2630300 00:18:22.133 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:22.133 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:22.133 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2630300 /var/tmp/bdevperf.sock 00:18:22.133 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2630300 ']' 00:18:22.133 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.133 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.133 
11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.133 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.133 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.392 [2024-11-19 11:20:17.670141] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:18:22.392 [2024-11-19 11:20:17.670227] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630300 ] 00:18:22.392 [2024-11-19 11:20:17.745866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.392 [2024-11-19 11:20:17.803809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.650 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.650 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:22.650 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lenRvSW2hN 00:18:22.909 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:23.167 [2024-11-19 11:20:18.443910] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:23.167 nvme0n1 00:18:23.167 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:23.167 Running I/O for 1 seconds... 00:18:24.539 3504.00 IOPS, 13.69 MiB/s 00:18:24.539 Latency(us) 00:18:24.539 [2024-11-19T10:20:20.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.539 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:24.539 Verification LBA range: start 0x0 length 0x2000 00:18:24.539 nvme0n1 : 1.02 3552.94 13.88 0.00 0.00 35709.13 6990.51 44467.39 00:18:24.539 [2024-11-19T10:20:20.036Z] =================================================================================================================== 00:18:24.539 [2024-11-19T10:20:20.036Z] Total : 3552.94 13.88 0.00 0.00 35709.13 6990.51 44467.39 00:18:24.539 { 00:18:24.539 "results": [ 00:18:24.539 { 00:18:24.539 "job": "nvme0n1", 00:18:24.539 "core_mask": "0x2", 00:18:24.539 "workload": "verify", 00:18:24.539 "status": "finished", 00:18:24.539 "verify_range": { 00:18:24.539 "start": 0, 00:18:24.539 "length": 8192 00:18:24.539 }, 00:18:24.539 "queue_depth": 128, 00:18:24.539 "io_size": 4096, 00:18:24.539 "runtime": 1.022251, 00:18:24.539 "iops": 3552.9434551788163, 00:18:24.539 "mibps": 13.878685371792251, 00:18:24.539 "io_failed": 0, 00:18:24.539 "io_timeout": 0, 00:18:24.539 "avg_latency_us": 35709.12715287975, 00:18:24.539 "min_latency_us": 6990.506666666667, 00:18:24.539 "max_latency_us": 44467.38962962963 00:18:24.539 } 00:18:24.539 ], 00:18:24.539 "core_count": 1 00:18:24.539 } 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2630300 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2630300 ']' 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2630300 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2630300 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2630300' 00:18:24.539 killing process with pid 2630300 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2630300 00:18:24.539 Received shutdown signal, test time was about 1.000000 seconds 00:18:24.539 00:18:24.539 Latency(us) 00:18:24.539 [2024-11-19T10:20:20.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.539 [2024-11-19T10:20:20.036Z] =================================================================================================================== 00:18:24.539 [2024-11-19T10:20:20.036Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2630300 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2630009 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2630009 ']' 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2630009 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2630009 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2630009' 00:18:24.539 killing process with pid 2630009 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2630009 00:18:24.539 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2630009 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2630588 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2630588 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2630588 ']' 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.797 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.797 [2024-11-19 11:20:20.229187] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:18:24.797 [2024-11-19 11:20:20.229271] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.055 [2024-11-19 11:20:20.315984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.055 [2024-11-19 11:20:20.370695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.055 [2024-11-19 11:20:20.370757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.055 [2024-11-19 11:20:20.370786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.055 [2024-11-19 11:20:20.370797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.055 [2024-11-19 11:20:20.370807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:25.055 [2024-11-19 11:20:20.371457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.055 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.055 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:25.055 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:25.055 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.055 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.055 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.055 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:25.055 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.055 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.055 [2024-11-19 11:20:20.513059] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.055 malloc0 00:18:25.055 [2024-11-19 11:20:20.545058] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:25.055 [2024-11-19 11:20:20.545350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.314 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.314 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2630722 00:18:25.314 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:25.314 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 2630722 /var/tmp/bdevperf.sock 00:18:25.314 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2630722 ']' 00:18:25.314 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.314 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.314 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.314 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.314 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.314 [2024-11-19 11:20:20.616925] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:18:25.314 [2024-11-19 11:20:20.617002] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630722 ] 00:18:25.314 [2024-11-19 11:20:20.690988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.314 [2024-11-19 11:20:20.747573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.572 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.572 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:25.572 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lenRvSW2hN 00:18:25.830 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:26.088 [2024-11-19 11:20:21.382172] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.088 nvme0n1 00:18:26.088 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:26.088 Running I/O for 1 seconds... 
00:18:27.464 3349.00 IOPS, 13.08 MiB/s 00:18:27.465 Latency(us) 00:18:27.465 [2024-11-19T10:20:22.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.465 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:27.465 Verification LBA range: start 0x0 length 0x2000 00:18:27.465 nvme0n1 : 1.02 3401.22 13.29 0.00 0.00 37264.52 5339.97 39418.69 00:18:27.465 [2024-11-19T10:20:22.962Z] =================================================================================================================== 00:18:27.465 [2024-11-19T10:20:22.962Z] Total : 3401.22 13.29 0.00 0.00 37264.52 5339.97 39418.69 00:18:27.465 { 00:18:27.465 "results": [ 00:18:27.465 { 00:18:27.465 "job": "nvme0n1", 00:18:27.465 "core_mask": "0x2", 00:18:27.465 "workload": "verify", 00:18:27.465 "status": "finished", 00:18:27.465 "verify_range": { 00:18:27.465 "start": 0, 00:18:27.465 "length": 8192 00:18:27.465 }, 00:18:27.465 "queue_depth": 128, 00:18:27.465 "io_size": 4096, 00:18:27.465 "runtime": 1.022279, 00:18:27.465 "iops": 3401.2241276598656, 00:18:27.465 "mibps": 13.28603174867135, 00:18:27.465 "io_failed": 0, 00:18:27.465 "io_timeout": 0, 00:18:27.465 "avg_latency_us": 37264.52046570585, 00:18:27.465 "min_latency_us": 5339.970370370371, 00:18:27.465 "max_latency_us": 39418.69037037037 00:18:27.465 } 00:18:27.465 ], 00:18:27.465 "core_count": 1 00:18:27.465 } 00:18:27.465 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:27.465 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.465 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.465 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.465 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:27.465 "subsystems": [ 00:18:27.465 { 00:18:27.465 "subsystem": 
"keyring", 00:18:27.465 "config": [ 00:18:27.465 { 00:18:27.465 "method": "keyring_file_add_key", 00:18:27.465 "params": { 00:18:27.465 "name": "key0", 00:18:27.465 "path": "/tmp/tmp.lenRvSW2hN" 00:18:27.465 } 00:18:27.465 } 00:18:27.465 ] 00:18:27.465 }, 00:18:27.465 { 00:18:27.465 "subsystem": "iobuf", 00:18:27.465 "config": [ 00:18:27.465 { 00:18:27.465 "method": "iobuf_set_options", 00:18:27.465 "params": { 00:18:27.465 "small_pool_count": 8192, 00:18:27.465 "large_pool_count": 1024, 00:18:27.465 "small_bufsize": 8192, 00:18:27.465 "large_bufsize": 135168, 00:18:27.465 "enable_numa": false 00:18:27.465 } 00:18:27.465 } 00:18:27.465 ] 00:18:27.465 }, 00:18:27.465 { 00:18:27.465 "subsystem": "sock", 00:18:27.465 "config": [ 00:18:27.465 { 00:18:27.465 "method": "sock_set_default_impl", 00:18:27.465 "params": { 00:18:27.465 "impl_name": "posix" 00:18:27.465 } 00:18:27.465 }, 00:18:27.465 { 00:18:27.465 "method": "sock_impl_set_options", 00:18:27.465 "params": { 00:18:27.465 "impl_name": "ssl", 00:18:27.465 "recv_buf_size": 4096, 00:18:27.465 "send_buf_size": 4096, 00:18:27.465 "enable_recv_pipe": true, 00:18:27.465 "enable_quickack": false, 00:18:27.465 "enable_placement_id": 0, 00:18:27.465 "enable_zerocopy_send_server": true, 00:18:27.465 "enable_zerocopy_send_client": false, 00:18:27.465 "zerocopy_threshold": 0, 00:18:27.465 "tls_version": 0, 00:18:27.465 "enable_ktls": false 00:18:27.465 } 00:18:27.465 }, 00:18:27.465 { 00:18:27.465 "method": "sock_impl_set_options", 00:18:27.465 "params": { 00:18:27.465 "impl_name": "posix", 00:18:27.465 "recv_buf_size": 2097152, 00:18:27.465 "send_buf_size": 2097152, 00:18:27.465 "enable_recv_pipe": true, 00:18:27.465 "enable_quickack": false, 00:18:27.465 "enable_placement_id": 0, 00:18:27.465 "enable_zerocopy_send_server": true, 00:18:27.465 "enable_zerocopy_send_client": false, 00:18:27.465 "zerocopy_threshold": 0, 00:18:27.465 "tls_version": 0, 00:18:27.465 "enable_ktls": false 00:18:27.465 } 00:18:27.465 } 00:18:27.465 
] 00:18:27.465 }, 00:18:27.465 { 00:18:27.465 "subsystem": "vmd", 00:18:27.465 "config": [] 00:18:27.465 }, 00:18:27.465 { 00:18:27.465 "subsystem": "accel", 00:18:27.465 "config": [ 00:18:27.465 { 00:18:27.465 "method": "accel_set_options", 00:18:27.465 "params": { 00:18:27.465 "small_cache_size": 128, 00:18:27.465 "large_cache_size": 16, 00:18:27.465 "task_count": 2048, 00:18:27.465 "sequence_count": 2048, 00:18:27.465 "buf_count": 2048 00:18:27.465 } 00:18:27.465 } 00:18:27.465 ] 00:18:27.465 }, 00:18:27.465 { 00:18:27.465 "subsystem": "bdev", 00:18:27.465 "config": [ 00:18:27.465 { 00:18:27.465 "method": "bdev_set_options", 00:18:27.465 "params": { 00:18:27.465 "bdev_io_pool_size": 65535, 00:18:27.465 "bdev_io_cache_size": 256, 00:18:27.465 "bdev_auto_examine": true, 00:18:27.465 "iobuf_small_cache_size": 128, 00:18:27.465 "iobuf_large_cache_size": 16 00:18:27.465 } 00:18:27.465 }, 00:18:27.465 { 00:18:27.465 "method": "bdev_raid_set_options", 00:18:27.465 "params": { 00:18:27.465 "process_window_size_kb": 1024, 00:18:27.465 "process_max_bandwidth_mb_sec": 0 00:18:27.465 } 00:18:27.465 }, 00:18:27.465 { 00:18:27.465 "method": "bdev_iscsi_set_options", 00:18:27.465 "params": { 00:18:27.465 "timeout_sec": 30 00:18:27.465 } 00:18:27.465 }, 00:18:27.465 { 00:18:27.465 "method": "bdev_nvme_set_options", 00:18:27.465 "params": { 00:18:27.465 "action_on_timeout": "none", 00:18:27.465 "timeout_us": 0, 00:18:27.465 "timeout_admin_us": 0, 00:18:27.465 "keep_alive_timeout_ms": 10000, 00:18:27.465 "arbitration_burst": 0, 00:18:27.465 "low_priority_weight": 0, 00:18:27.465 "medium_priority_weight": 0, 00:18:27.465 "high_priority_weight": 0, 00:18:27.465 "nvme_adminq_poll_period_us": 10000, 00:18:27.465 "nvme_ioq_poll_period_us": 0, 00:18:27.465 "io_queue_requests": 0, 00:18:27.465 "delay_cmd_submit": true, 00:18:27.465 "transport_retry_count": 4, 00:18:27.465 "bdev_retry_count": 3, 00:18:27.465 "transport_ack_timeout": 0, 00:18:27.465 "ctrlr_loss_timeout_sec": 0, 
00:18:27.465 "reconnect_delay_sec": 0, 00:18:27.465 "fast_io_fail_timeout_sec": 0, 00:18:27.465 "disable_auto_failback": false, 00:18:27.465 "generate_uuids": false, 00:18:27.465 "transport_tos": 0, 00:18:27.465 "nvme_error_stat": false, 00:18:27.465 "rdma_srq_size": 0, 00:18:27.465 "io_path_stat": false, 00:18:27.465 "allow_accel_sequence": false, 00:18:27.465 "rdma_max_cq_size": 0, 00:18:27.465 "rdma_cm_event_timeout_ms": 0, 00:18:27.465 "dhchap_digests": [ 00:18:27.465 "sha256", 00:18:27.465 "sha384", 00:18:27.466 "sha512" 00:18:27.466 ], 00:18:27.466 "dhchap_dhgroups": [ 00:18:27.466 "null", 00:18:27.466 "ffdhe2048", 00:18:27.466 "ffdhe3072", 00:18:27.466 "ffdhe4096", 00:18:27.466 "ffdhe6144", 00:18:27.466 "ffdhe8192" 00:18:27.466 ] 00:18:27.466 } 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "method": "bdev_nvme_set_hotplug", 00:18:27.466 "params": { 00:18:27.466 "period_us": 100000, 00:18:27.466 "enable": false 00:18:27.466 } 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "method": "bdev_malloc_create", 00:18:27.466 "params": { 00:18:27.466 "name": "malloc0", 00:18:27.466 "num_blocks": 8192, 00:18:27.466 "block_size": 4096, 00:18:27.466 "physical_block_size": 4096, 00:18:27.466 "uuid": "5f4c4445-ec17-467f-8539-cec33d258253", 00:18:27.466 "optimal_io_boundary": 0, 00:18:27.466 "md_size": 0, 00:18:27.466 "dif_type": 0, 00:18:27.466 "dif_is_head_of_md": false, 00:18:27.466 "dif_pi_format": 0 00:18:27.466 } 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "method": "bdev_wait_for_examine" 00:18:27.466 } 00:18:27.466 ] 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "subsystem": "nbd", 00:18:27.466 "config": [] 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "subsystem": "scheduler", 00:18:27.466 "config": [ 00:18:27.466 { 00:18:27.466 "method": "framework_set_scheduler", 00:18:27.466 "params": { 00:18:27.466 "name": "static" 00:18:27.466 } 00:18:27.466 } 00:18:27.466 ] 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "subsystem": "nvmf", 00:18:27.466 "config": [ 00:18:27.466 { 
00:18:27.466 "method": "nvmf_set_config", 00:18:27.466 "params": { 00:18:27.466 "discovery_filter": "match_any", 00:18:27.466 "admin_cmd_passthru": { 00:18:27.466 "identify_ctrlr": false 00:18:27.466 }, 00:18:27.466 "dhchap_digests": [ 00:18:27.466 "sha256", 00:18:27.466 "sha384", 00:18:27.466 "sha512" 00:18:27.466 ], 00:18:27.466 "dhchap_dhgroups": [ 00:18:27.466 "null", 00:18:27.466 "ffdhe2048", 00:18:27.466 "ffdhe3072", 00:18:27.466 "ffdhe4096", 00:18:27.466 "ffdhe6144", 00:18:27.466 "ffdhe8192" 00:18:27.466 ] 00:18:27.466 } 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "method": "nvmf_set_max_subsystems", 00:18:27.466 "params": { 00:18:27.466 "max_subsystems": 1024 00:18:27.466 } 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "method": "nvmf_set_crdt", 00:18:27.466 "params": { 00:18:27.466 "crdt1": 0, 00:18:27.466 "crdt2": 0, 00:18:27.466 "crdt3": 0 00:18:27.466 } 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "method": "nvmf_create_transport", 00:18:27.466 "params": { 00:18:27.466 "trtype": "TCP", 00:18:27.466 "max_queue_depth": 128, 00:18:27.466 "max_io_qpairs_per_ctrlr": 127, 00:18:27.466 "in_capsule_data_size": 4096, 00:18:27.466 "max_io_size": 131072, 00:18:27.466 "io_unit_size": 131072, 00:18:27.466 "max_aq_depth": 128, 00:18:27.466 "num_shared_buffers": 511, 00:18:27.466 "buf_cache_size": 4294967295, 00:18:27.466 "dif_insert_or_strip": false, 00:18:27.466 "zcopy": false, 00:18:27.466 "c2h_success": false, 00:18:27.466 "sock_priority": 0, 00:18:27.466 "abort_timeout_sec": 1, 00:18:27.466 "ack_timeout": 0, 00:18:27.466 "data_wr_pool_size": 0 00:18:27.466 } 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "method": "nvmf_create_subsystem", 00:18:27.466 "params": { 00:18:27.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.466 "allow_any_host": false, 00:18:27.466 "serial_number": "00000000000000000000", 00:18:27.466 "model_number": "SPDK bdev Controller", 00:18:27.466 "max_namespaces": 32, 00:18:27.466 "min_cntlid": 1, 00:18:27.466 "max_cntlid": 65519, 00:18:27.466 
"ana_reporting": false 00:18:27.466 } 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "method": "nvmf_subsystem_add_host", 00:18:27.466 "params": { 00:18:27.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.466 "host": "nqn.2016-06.io.spdk:host1", 00:18:27.466 "psk": "key0" 00:18:27.466 } 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "method": "nvmf_subsystem_add_ns", 00:18:27.466 "params": { 00:18:27.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.466 "namespace": { 00:18:27.466 "nsid": 1, 00:18:27.466 "bdev_name": "malloc0", 00:18:27.466 "nguid": "5F4C4445EC17467F8539CEC33D258253", 00:18:27.466 "uuid": "5f4c4445-ec17-467f-8539-cec33d258253", 00:18:27.466 "no_auto_visible": false 00:18:27.466 } 00:18:27.466 } 00:18:27.466 }, 00:18:27.466 { 00:18:27.466 "method": "nvmf_subsystem_add_listener", 00:18:27.466 "params": { 00:18:27.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.466 "listen_address": { 00:18:27.466 "trtype": "TCP", 00:18:27.466 "adrfam": "IPv4", 00:18:27.466 "traddr": "10.0.0.2", 00:18:27.466 "trsvcid": "4420" 00:18:27.466 }, 00:18:27.466 "secure_channel": false, 00:18:27.466 "sock_impl": "ssl" 00:18:27.466 } 00:18:27.466 } 00:18:27.466 ] 00:18:27.466 } 00:18:27.466 ] 00:18:27.466 }' 00:18:27.466 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:27.726 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:27.726 "subsystems": [ 00:18:27.726 { 00:18:27.726 "subsystem": "keyring", 00:18:27.726 "config": [ 00:18:27.726 { 00:18:27.726 "method": "keyring_file_add_key", 00:18:27.726 "params": { 00:18:27.726 "name": "key0", 00:18:27.726 "path": "/tmp/tmp.lenRvSW2hN" 00:18:27.726 } 00:18:27.726 } 00:18:27.726 ] 00:18:27.726 }, 00:18:27.726 { 00:18:27.726 "subsystem": "iobuf", 00:18:27.726 "config": [ 00:18:27.726 { 00:18:27.726 "method": "iobuf_set_options", 00:18:27.726 "params": { 00:18:27.726 
"small_pool_count": 8192, 00:18:27.726 "large_pool_count": 1024, 00:18:27.726 "small_bufsize": 8192, 00:18:27.726 "large_bufsize": 135168, 00:18:27.726 "enable_numa": false 00:18:27.726 } 00:18:27.726 } 00:18:27.726 ] 00:18:27.726 }, 00:18:27.726 { 00:18:27.726 "subsystem": "sock", 00:18:27.726 "config": [ 00:18:27.726 { 00:18:27.726 "method": "sock_set_default_impl", 00:18:27.726 "params": { 00:18:27.726 "impl_name": "posix" 00:18:27.726 } 00:18:27.726 }, 00:18:27.726 { 00:18:27.726 "method": "sock_impl_set_options", 00:18:27.726 "params": { 00:18:27.726 "impl_name": "ssl", 00:18:27.726 "recv_buf_size": 4096, 00:18:27.726 "send_buf_size": 4096, 00:18:27.726 "enable_recv_pipe": true, 00:18:27.726 "enable_quickack": false, 00:18:27.726 "enable_placement_id": 0, 00:18:27.726 "enable_zerocopy_send_server": true, 00:18:27.726 "enable_zerocopy_send_client": false, 00:18:27.726 "zerocopy_threshold": 0, 00:18:27.726 "tls_version": 0, 00:18:27.726 "enable_ktls": false 00:18:27.726 } 00:18:27.726 }, 00:18:27.726 { 00:18:27.726 "method": "sock_impl_set_options", 00:18:27.726 "params": { 00:18:27.726 "impl_name": "posix", 00:18:27.726 "recv_buf_size": 2097152, 00:18:27.726 "send_buf_size": 2097152, 00:18:27.726 "enable_recv_pipe": true, 00:18:27.726 "enable_quickack": false, 00:18:27.726 "enable_placement_id": 0, 00:18:27.726 "enable_zerocopy_send_server": true, 00:18:27.726 "enable_zerocopy_send_client": false, 00:18:27.726 "zerocopy_threshold": 0, 00:18:27.726 "tls_version": 0, 00:18:27.726 "enable_ktls": false 00:18:27.726 } 00:18:27.726 } 00:18:27.726 ] 00:18:27.726 }, 00:18:27.726 { 00:18:27.726 "subsystem": "vmd", 00:18:27.726 "config": [] 00:18:27.726 }, 00:18:27.726 { 00:18:27.726 "subsystem": "accel", 00:18:27.726 "config": [ 00:18:27.726 { 00:18:27.726 "method": "accel_set_options", 00:18:27.726 "params": { 00:18:27.726 "small_cache_size": 128, 00:18:27.726 "large_cache_size": 16, 00:18:27.726 "task_count": 2048, 00:18:27.726 "sequence_count": 2048, 00:18:27.726 
"buf_count": 2048 00:18:27.726 } 00:18:27.726 } 00:18:27.726 ] 00:18:27.726 }, 00:18:27.726 { 00:18:27.726 "subsystem": "bdev", 00:18:27.726 "config": [ 00:18:27.726 { 00:18:27.726 "method": "bdev_set_options", 00:18:27.726 "params": { 00:18:27.726 "bdev_io_pool_size": 65535, 00:18:27.726 "bdev_io_cache_size": 256, 00:18:27.726 "bdev_auto_examine": true, 00:18:27.726 "iobuf_small_cache_size": 128, 00:18:27.726 "iobuf_large_cache_size": 16 00:18:27.726 } 00:18:27.726 }, 00:18:27.726 { 00:18:27.726 "method": "bdev_raid_set_options", 00:18:27.726 "params": { 00:18:27.726 "process_window_size_kb": 1024, 00:18:27.726 "process_max_bandwidth_mb_sec": 0 00:18:27.726 } 00:18:27.726 }, 00:18:27.726 { 00:18:27.726 "method": "bdev_iscsi_set_options", 00:18:27.726 "params": { 00:18:27.726 "timeout_sec": 30 00:18:27.726 } 00:18:27.726 }, 00:18:27.726 { 00:18:27.726 "method": "bdev_nvme_set_options", 00:18:27.726 "params": { 00:18:27.726 "action_on_timeout": "none", 00:18:27.726 "timeout_us": 0, 00:18:27.726 "timeout_admin_us": 0, 00:18:27.726 "keep_alive_timeout_ms": 10000, 00:18:27.726 "arbitration_burst": 0, 00:18:27.726 "low_priority_weight": 0, 00:18:27.726 "medium_priority_weight": 0, 00:18:27.726 "high_priority_weight": 0, 00:18:27.726 "nvme_adminq_poll_period_us": 10000, 00:18:27.726 "nvme_ioq_poll_period_us": 0, 00:18:27.726 "io_queue_requests": 512, 00:18:27.726 "delay_cmd_submit": true, 00:18:27.726 "transport_retry_count": 4, 00:18:27.726 "bdev_retry_count": 3, 00:18:27.726 "transport_ack_timeout": 0, 00:18:27.726 "ctrlr_loss_timeout_sec": 0, 00:18:27.726 "reconnect_delay_sec": 0, 00:18:27.726 "fast_io_fail_timeout_sec": 0, 00:18:27.726 "disable_auto_failback": false, 00:18:27.726 "generate_uuids": false, 00:18:27.726 "transport_tos": 0, 00:18:27.726 "nvme_error_stat": false, 00:18:27.726 "rdma_srq_size": 0, 00:18:27.726 "io_path_stat": false, 00:18:27.726 "allow_accel_sequence": false, 00:18:27.726 "rdma_max_cq_size": 0, 00:18:27.726 "rdma_cm_event_timeout_ms": 0, 
00:18:27.727 "dhchap_digests": [ 00:18:27.727 "sha256", 00:18:27.727 "sha384", 00:18:27.727 "sha512" 00:18:27.727 ], 00:18:27.727 "dhchap_dhgroups": [ 00:18:27.727 "null", 00:18:27.727 "ffdhe2048", 00:18:27.727 "ffdhe3072", 00:18:27.727 "ffdhe4096", 00:18:27.727 "ffdhe6144", 00:18:27.727 "ffdhe8192" 00:18:27.727 ] 00:18:27.727 } 00:18:27.727 }, 00:18:27.727 { 00:18:27.727 "method": "bdev_nvme_attach_controller", 00:18:27.727 "params": { 00:18:27.727 "name": "nvme0", 00:18:27.727 "trtype": "TCP", 00:18:27.727 "adrfam": "IPv4", 00:18:27.727 "traddr": "10.0.0.2", 00:18:27.727 "trsvcid": "4420", 00:18:27.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.727 "prchk_reftag": false, 00:18:27.727 "prchk_guard": false, 00:18:27.727 "ctrlr_loss_timeout_sec": 0, 00:18:27.727 "reconnect_delay_sec": 0, 00:18:27.727 "fast_io_fail_timeout_sec": 0, 00:18:27.727 "psk": "key0", 00:18:27.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.727 "hdgst": false, 00:18:27.727 "ddgst": false, 00:18:27.727 "multipath": "multipath" 00:18:27.727 } 00:18:27.727 }, 00:18:27.727 { 00:18:27.727 "method": "bdev_nvme_set_hotplug", 00:18:27.727 "params": { 00:18:27.727 "period_us": 100000, 00:18:27.727 "enable": false 00:18:27.727 } 00:18:27.727 }, 00:18:27.727 { 00:18:27.727 "method": "bdev_enable_histogram", 00:18:27.727 "params": { 00:18:27.727 "name": "nvme0n1", 00:18:27.727 "enable": true 00:18:27.727 } 00:18:27.727 }, 00:18:27.727 { 00:18:27.727 "method": "bdev_wait_for_examine" 00:18:27.727 } 00:18:27.727 ] 00:18:27.727 }, 00:18:27.727 { 00:18:27.727 "subsystem": "nbd", 00:18:27.727 "config": [] 00:18:27.727 } 00:18:27.727 ] 00:18:27.727 }' 00:18:27.727 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2630722 00:18:27.727 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2630722 ']' 00:18:27.727 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2630722 00:18:27.727 11:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.727 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.727 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2630722 00:18:27.727 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:27.727 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:27.727 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2630722' 00:18:27.727 killing process with pid 2630722 00:18:27.727 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2630722 00:18:27.727 Received shutdown signal, test time was about 1.000000 seconds 00:18:27.727 00:18:27.727 Latency(us) 00:18:27.727 [2024-11-19T10:20:23.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.727 [2024-11-19T10:20:23.224Z] =================================================================================================================== 00:18:27.727 [2024-11-19T10:20:23.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.727 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2630722 00:18:27.986 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2630588 00:18:27.986 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2630588 ']' 00:18:27.986 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2630588 00:18:27.986 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.986 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.986 
11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2630588 00:18:27.986 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:27.986 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:27.986 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2630588' 00:18:27.986 killing process with pid 2630588 00:18:27.986 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2630588 00:18:27.986 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2630588 00:18:28.245 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:28.245 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.245 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:28.245 "subsystems": [ 00:18:28.245 { 00:18:28.245 "subsystem": "keyring", 00:18:28.245 "config": [ 00:18:28.245 { 00:18:28.245 "method": "keyring_file_add_key", 00:18:28.245 "params": { 00:18:28.245 "name": "key0", 00:18:28.245 "path": "/tmp/tmp.lenRvSW2hN" 00:18:28.245 } 00:18:28.245 } 00:18:28.245 ] 00:18:28.245 }, 00:18:28.245 { 00:18:28.245 "subsystem": "iobuf", 00:18:28.245 "config": [ 00:18:28.245 { 00:18:28.245 "method": "iobuf_set_options", 00:18:28.245 "params": { 00:18:28.245 "small_pool_count": 8192, 00:18:28.245 "large_pool_count": 1024, 00:18:28.245 "small_bufsize": 8192, 00:18:28.245 "large_bufsize": 135168, 00:18:28.245 "enable_numa": false 00:18:28.245 } 00:18:28.245 } 00:18:28.245 ] 00:18:28.245 }, 00:18:28.245 { 00:18:28.245 "subsystem": "sock", 00:18:28.245 "config": [ 00:18:28.245 { 00:18:28.245 "method": "sock_set_default_impl", 00:18:28.245 "params": { 00:18:28.245 "impl_name": "posix" 
00:18:28.245 } 00:18:28.245 }, 00:18:28.245 { 00:18:28.245 "method": "sock_impl_set_options", 00:18:28.245 "params": { 00:18:28.245 "impl_name": "ssl", 00:18:28.245 "recv_buf_size": 4096, 00:18:28.245 "send_buf_size": 4096, 00:18:28.245 "enable_recv_pipe": true, 00:18:28.245 "enable_quickack": false, 00:18:28.245 "enable_placement_id": 0, 00:18:28.245 "enable_zerocopy_send_server": true, 00:18:28.245 "enable_zerocopy_send_client": false, 00:18:28.245 "zerocopy_threshold": 0, 00:18:28.245 "tls_version": 0, 00:18:28.245 "enable_ktls": false 00:18:28.245 } 00:18:28.245 }, 00:18:28.245 { 00:18:28.245 "method": "sock_impl_set_options", 00:18:28.245 "params": { 00:18:28.245 "impl_name": "posix", 00:18:28.245 "recv_buf_size": 2097152, 00:18:28.245 "send_buf_size": 2097152, 00:18:28.245 "enable_recv_pipe": true, 00:18:28.245 "enable_quickack": false, 00:18:28.245 "enable_placement_id": 0, 00:18:28.245 "enable_zerocopy_send_server": true, 00:18:28.245 "enable_zerocopy_send_client": false, 00:18:28.245 "zerocopy_threshold": 0, 00:18:28.245 "tls_version": 0, 00:18:28.245 "enable_ktls": false 00:18:28.245 } 00:18:28.245 } 00:18:28.245 ] 00:18:28.245 }, 00:18:28.245 { 00:18:28.245 "subsystem": "vmd", 00:18:28.245 "config": [] 00:18:28.245 }, 00:18:28.245 { 00:18:28.245 "subsystem": "accel", 00:18:28.245 "config": [ 00:18:28.245 { 00:18:28.245 "method": "accel_set_options", 00:18:28.245 "params": { 00:18:28.245 "small_cache_size": 128, 00:18:28.245 "large_cache_size": 16, 00:18:28.245 "task_count": 2048, 00:18:28.245 "sequence_count": 2048, 00:18:28.245 "buf_count": 2048 00:18:28.245 } 00:18:28.245 } 00:18:28.245 ] 00:18:28.245 }, 00:18:28.245 { 00:18:28.245 "subsystem": "bdev", 00:18:28.245 "config": [ 00:18:28.245 { 00:18:28.245 "method": "bdev_set_options", 00:18:28.245 "params": { 00:18:28.245 "bdev_io_pool_size": 65535, 00:18:28.245 "bdev_io_cache_size": 256, 00:18:28.245 "bdev_auto_examine": true, 00:18:28.245 "iobuf_small_cache_size": 128, 00:18:28.245 
"iobuf_large_cache_size": 16 00:18:28.245 } 00:18:28.245 }, 00:18:28.245 { 00:18:28.245 "method": "bdev_raid_set_options", 00:18:28.245 "params": { 00:18:28.245 "process_window_size_kb": 1024, 00:18:28.245 "process_max_bandwidth_mb_sec": 0 00:18:28.245 } 00:18:28.245 }, 00:18:28.245 { 00:18:28.245 "method": "bdev_iscsi_set_options", 00:18:28.245 "params": { 00:18:28.245 "timeout_sec": 30 00:18:28.245 } 00:18:28.245 }, 00:18:28.245 { 00:18:28.245 "method": "bdev_nvme_set_options", 00:18:28.245 "params": { 00:18:28.245 "action_on_timeout": "none", 00:18:28.245 "timeout_us": 0, 00:18:28.245 "timeout_admin_us": 0, 00:18:28.245 "keep_alive_timeout_ms": 10000, 00:18:28.245 "arbitration_burst": 0, 00:18:28.245 "low_priority_weight": 0, 00:18:28.245 "medium_priority_weight": 0, 00:18:28.245 "high_priority_weight": 0, 00:18:28.245 "nvme_adminq_poll_period_us": 10000, 00:18:28.245 "nvme_ioq_poll_period_us": 0, 00:18:28.245 "io_queue_requests": 0, 00:18:28.245 "delay_cmd_submit": true, 00:18:28.245 "transport_retry_count": 4, 00:18:28.245 "bdev_retry_count": 3, 00:18:28.245 "transport_ack_timeout": 0, 00:18:28.245 "ctrlr_loss_timeout_sec": 0, 00:18:28.245 "reconnect_delay_sec": 0, 00:18:28.245 "fast_io_fail_timeout_sec": 0, 00:18:28.245 "disable_auto_failback": false, 00:18:28.245 "generate_uuids": false, 00:18:28.245 "transport_tos": 0, 00:18:28.245 "nvme_error_stat": false, 00:18:28.245 "rdma_srq_size": 0, 00:18:28.245 "io_path_stat": false, 00:18:28.245 "allow_accel_sequence": false, 00:18:28.245 "rdma_max_cq_size": 0, 00:18:28.245 "rdma_cm_event_timeout_ms": 0, 00:18:28.245 "dhchap_digests": [ 00:18:28.245 "sha256", 00:18:28.245 "sha384", 00:18:28.245 "sha512" 00:18:28.245 ], 00:18:28.245 "dhchap_dhgroups": [ 00:18:28.245 "null", 00:18:28.245 "ffdhe2048", 00:18:28.245 "ffdhe3072", 00:18:28.245 "ffdhe4096", 00:18:28.245 "ffdhe6144", 00:18:28.245 "ffdhe8192" 00:18:28.245 ] 00:18:28.245 } 00:18:28.245 }, 00:18:28.245 { 00:18:28.245 "method": "bdev_nvme_set_hotplug", 
00:18:28.245 "params": { 00:18:28.245 "period_us": 100000, 00:18:28.245 "enable": false 00:18:28.245 } 00:18:28.245 }, 00:18:28.245 { 00:18:28.245 "method": "bdev_malloc_create", 00:18:28.245 "params": { 00:18:28.245 "name": "malloc0", 00:18:28.245 "num_blocks": 8192, 00:18:28.245 "block_size": 4096, 00:18:28.245 "physical_block_size": 4096, 00:18:28.245 "uuid": "5f4c4445-ec17-467f-8539-cec33d258253", 00:18:28.245 "optimal_io_boundary": 0, 00:18:28.245 "md_size": 0, 00:18:28.246 "dif_type": 0, 00:18:28.246 "dif_is_head_of_md": false, 00:18:28.246 "dif_pi_format": 0 00:18:28.246 } 00:18:28.246 }, 00:18:28.246 { 00:18:28.246 "method": "bdev_wait_for_examine" 00:18:28.246 } 00:18:28.246 ] 00:18:28.246 }, 00:18:28.246 { 00:18:28.246 "subsystem": "nbd", 00:18:28.246 "config": [] 00:18:28.246 }, 00:18:28.246 { 00:18:28.246 "subsystem": "scheduler", 00:18:28.246 "config": [ 00:18:28.246 { 00:18:28.246 "method": "framework_set_scheduler", 00:18:28.246 "params": { 00:18:28.246 "name": "static" 00:18:28.246 } 00:18:28.246 } 00:18:28.246 ] 00:18:28.246 }, 00:18:28.246 { 00:18:28.246 "subsystem": "nvmf", 00:18:28.246 "config": [ 00:18:28.246 { 00:18:28.246 "method": "nvmf_set_config", 00:18:28.246 "params": { 00:18:28.246 "discovery_filter": "match_any", 00:18:28.246 "admin_cmd_passthru": { 00:18:28.246 "identify_ctrlr": false 00:18:28.246 }, 00:18:28.246 "dhchap_digests": [ 00:18:28.246 "sha256", 00:18:28.246 "sha384", 00:18:28.246 "sha512" 00:18:28.246 ], 00:18:28.246 "dhchap_dhgroups": [ 00:18:28.246 "null", 00:18:28.246 "ffdhe2048", 00:18:28.246 "ffdhe3072", 00:18:28.246 "ffdhe4096", 00:18:28.246 "ffdhe6144", 00:18:28.246 "ffdhe8192" 00:18:28.246 ] 00:18:28.246 } 00:18:28.246 }, 00:18:28.246 { 00:18:28.246 "method": "nvmf_set_max_subsystems", 00:18:28.246 "params": { 00:18:28.246 "max_subsystems": 1024 00:18:28.246 } 00:18:28.246 }, 00:18:28.246 { 00:18:28.246 "method": "nvmf_set_crdt", 00:18:28.246 "params": { 00:18:28.246 "crdt1": 0, 00:18:28.246 "crdt2": 0, 00:18:28.246 
"crdt3": 0 00:18:28.246 } 00:18:28.246 }, 00:18:28.246 { 00:18:28.246 "method": "nvmf_create_transport", 00:18:28.246 "params": { 00:18:28.246 "trtype": "TCP", 00:18:28.246 "max_queue_depth": 128, 00:18:28.246 "max_io_qpairs_per_ctrlr": 127, 00:18:28.246 "in_capsule_data_size": 4096, 00:18:28.246 "max_io_size": 131072, 00:18:28.246 "io_unit_size": 131072, 00:18:28.246 "max_aq_depth": 128, 00:18:28.246 "num_shared_buffers": 511, 00:18:28.246 "buf_cache_size": 4294967295, 00:18:28.246 "dif_insert_or_strip": false, 00:18:28.246 "zcopy": false, 00:18:28.246 "c2h_success": false, 00:18:28.246 "sock_priority": 0, 00:18:28.246 "abort_timeout_sec": 1, 00:18:28.246 "ack_timeout": 0, 00:18:28.246 "data_wr_pool_size": 0 00:18:28.246 } 00:18:28.246 }, 00:18:28.246 { 00:18:28.246 "method": "nvmf_create_subsystem", 00:18:28.246 "params": { 00:18:28.246 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.246 "allow_any_host": false, 00:18:28.246 "serial_number": "00000000000000000000", 00:18:28.246 "model_number": "SPDK bdev Controller", 00:18:28.246 "max_namespaces": 32, 00:18:28.246 "min_cntlid": 1, 00:18:28.246 "max_cntlid": 65519, 00:18:28.246 "ana_reporting": false 00:18:28.246 } 00:18:28.246 }, 00:18:28.246 { 00:18:28.246 "method": "nvmf_subsystem_add_host", 00:18:28.246 "params": { 00:18:28.246 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.246 "host": "nqn.2016-06.io.spdk:host1", 00:18:28.246 "psk": "key0" 00:18:28.246 } 00:18:28.246 }, 00:18:28.246 { 00:18:28.246 "method": "nvmf_subsystem_add_ns", 00:18:28.246 "params": { 00:18:28.246 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.246 "namespace": { 00:18:28.246 "nsid": 1, 00:18:28.246 "bdev_name": "malloc0", 00:18:28.246 "nguid": "5F4C4445EC17467F8539CEC33D258253", 00:18:28.246 "uuid": "5f4c4445-ec17-467f-8539-cec33d258253", 00:18:28.246 "no_auto_visible": false 00:18:28.246 } 00:18:28.246 } 00:18:28.246 }, 00:18:28.246 { 00:18:28.246 "method": "nvmf_subsystem_add_listener", 00:18:28.246 "params": { 00:18:28.246 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:18:28.246 "listen_address": { 00:18:28.246 "trtype": "TCP", 00:18:28.246 "adrfam": "IPv4", 00:18:28.246 "traddr": "10.0.0.2", 00:18:28.246 "trsvcid": "4420" 00:18:28.246 }, 00:18:28.246 "secure_channel": false, 00:18:28.246 "sock_impl": "ssl" 00:18:28.246 } 00:18:28.246 } 00:18:28.246 ] 00:18:28.246 } 00:18:28.246 ] 00:18:28.246 }' 00:18:28.246 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.246 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.246 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2631023 00:18:28.246 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:28.246 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2631023 00:18:28.246 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2631023 ']' 00:18:28.246 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.246 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.246 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.246 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.246 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.246 [2024-11-19 11:20:23.649793] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:18:28.246 [2024-11-19 11:20:23.649898] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.505 [2024-11-19 11:20:23.745902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.505 [2024-11-19 11:20:23.803664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.505 [2024-11-19 11:20:23.803732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.505 [2024-11-19 11:20:23.803745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.505 [2024-11-19 11:20:23.803757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.505 [2024-11-19 11:20:23.803767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:28.505 [2024-11-19 11:20:23.804461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.764 [2024-11-19 11:20:24.045681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.764 [2024-11-19 11:20:24.077711] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.764 [2024-11-19 11:20:24.077929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2631168 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2631168 /var/tmp/bdevperf.sock 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2631168 ']' 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.331 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:29.331 "subsystems": [ 00:18:29.331 { 00:18:29.331 "subsystem": "keyring", 00:18:29.331 "config": [ 00:18:29.331 { 00:18:29.331 "method": "keyring_file_add_key", 00:18:29.331 "params": { 00:18:29.331 "name": "key0", 00:18:29.331 "path": "/tmp/tmp.lenRvSW2hN" 00:18:29.331 } 00:18:29.331 } 00:18:29.331 ] 00:18:29.331 }, 00:18:29.331 { 00:18:29.331 "subsystem": "iobuf", 00:18:29.331 "config": [ 00:18:29.331 { 00:18:29.331 "method": "iobuf_set_options", 00:18:29.331 "params": { 00:18:29.331 "small_pool_count": 8192, 00:18:29.331 "large_pool_count": 1024, 00:18:29.331 "small_bufsize": 8192, 00:18:29.331 "large_bufsize": 135168, 00:18:29.331 "enable_numa": false 00:18:29.331 } 00:18:29.331 } 00:18:29.331 ] 00:18:29.331 }, 00:18:29.331 { 00:18:29.331 "subsystem": "sock", 00:18:29.331 "config": [ 00:18:29.331 { 00:18:29.331 "method": "sock_set_default_impl", 00:18:29.331 "params": { 00:18:29.331 "impl_name": "posix" 00:18:29.331 } 00:18:29.331 }, 00:18:29.331 { 00:18:29.331 "method": "sock_impl_set_options", 00:18:29.331 "params": { 00:18:29.331 "impl_name": "ssl", 00:18:29.331 "recv_buf_size": 4096, 00:18:29.331 "send_buf_size": 4096, 00:18:29.331 "enable_recv_pipe": true, 00:18:29.331 "enable_quickack": false, 00:18:29.331 "enable_placement_id": 0, 00:18:29.331 "enable_zerocopy_send_server": true, 00:18:29.331 "enable_zerocopy_send_client": false, 00:18:29.331 "zerocopy_threshold": 0, 00:18:29.331 "tls_version": 0, 00:18:29.331 "enable_ktls": false 00:18:29.331 } 
00:18:29.331 }, 00:18:29.331 { 00:18:29.331 "method": "sock_impl_set_options", 00:18:29.331 "params": { 00:18:29.331 "impl_name": "posix", 00:18:29.331 "recv_buf_size": 2097152, 00:18:29.331 "send_buf_size": 2097152, 00:18:29.331 "enable_recv_pipe": true, 00:18:29.331 "enable_quickack": false, 00:18:29.331 "enable_placement_id": 0, 00:18:29.331 "enable_zerocopy_send_server": true, 00:18:29.331 "enable_zerocopy_send_client": false, 00:18:29.331 "zerocopy_threshold": 0, 00:18:29.331 "tls_version": 0, 00:18:29.331 "enable_ktls": false 00:18:29.331 } 00:18:29.331 } 00:18:29.331 ] 00:18:29.331 }, 00:18:29.331 { 00:18:29.331 "subsystem": "vmd", 00:18:29.331 "config": [] 00:18:29.331 }, 00:18:29.331 { 00:18:29.331 "subsystem": "accel", 00:18:29.331 "config": [ 00:18:29.331 { 00:18:29.331 "method": "accel_set_options", 00:18:29.331 "params": { 00:18:29.331 "small_cache_size": 128, 00:18:29.331 "large_cache_size": 16, 00:18:29.331 "task_count": 2048, 00:18:29.331 "sequence_count": 2048, 00:18:29.331 "buf_count": 2048 00:18:29.331 } 00:18:29.331 } 00:18:29.331 ] 00:18:29.331 }, 00:18:29.331 { 00:18:29.331 "subsystem": "bdev", 00:18:29.331 "config": [ 00:18:29.332 { 00:18:29.332 "method": "bdev_set_options", 00:18:29.332 "params": { 00:18:29.332 "bdev_io_pool_size": 65535, 00:18:29.332 "bdev_io_cache_size": 256, 00:18:29.332 "bdev_auto_examine": true, 00:18:29.332 "iobuf_small_cache_size": 128, 00:18:29.332 "iobuf_large_cache_size": 16 00:18:29.332 } 00:18:29.332 }, 00:18:29.332 { 00:18:29.332 "method": "bdev_raid_set_options", 00:18:29.332 "params": { 00:18:29.332 "process_window_size_kb": 1024, 00:18:29.332 "process_max_bandwidth_mb_sec": 0 00:18:29.332 } 00:18:29.332 }, 00:18:29.332 { 00:18:29.332 "method": "bdev_iscsi_set_options", 00:18:29.332 "params": { 00:18:29.332 "timeout_sec": 30 00:18:29.332 } 00:18:29.332 }, 00:18:29.332 { 00:18:29.332 "method": "bdev_nvme_set_options", 00:18:29.332 "params": { 00:18:29.332 "action_on_timeout": "none", 00:18:29.332 "timeout_us": 
0, 00:18:29.332 "timeout_admin_us": 0, 00:18:29.332 "keep_alive_timeout_ms": 10000, 00:18:29.332 "arbitration_burst": 0, 00:18:29.332 "low_priority_weight": 0, 00:18:29.332 "medium_priority_weight": 0, 00:18:29.332 "high_priority_weight": 0, 00:18:29.332 "nvme_adminq_poll_period_us": 10000, 00:18:29.332 "nvme_ioq_poll_period_us": 0, 00:18:29.332 "io_queue_requests": 512, 00:18:29.332 "delay_cmd_submit": true, 00:18:29.332 "transport_retry_count": 4, 00:18:29.332 "bdev_retry_count": 3, 00:18:29.332 "transport_ack_timeout": 0, 00:18:29.332 "ctrlr_loss_timeout_sec": 0, 00:18:29.332 "reconnect_delay_sec": 0, 00:18:29.332 "fast_io_fail_timeout_sec": 0, 00:18:29.332 "disable_auto_failback": false, 00:18:29.332 "generate_uuids": false, 00:18:29.332 "transport_tos": 0, 00:18:29.332 "nvme_error_stat": false, 00:18:29.332 "rdma_srq_size": 0, 00:18:29.332 "io_path_stat": false, 00:18:29.332 "allow_accel_sequence": false, 00:18:29.332 "rdma_max_cq_size": 0, 00:18:29.332 "rdma_cm_event_timeout_ms": 0, 00:18:29.332 "dhchap_digests": [ 00:18:29.332 "sha256", 00:18:29.332 "sha384", 00:18:29.332 "sha512" 00:18:29.332 ], 00:18:29.332 "dhchap_dhgroups": [ 00:18:29.332 "null", 00:18:29.332 "ffdhe2048", 00:18:29.332 "ffdhe3072", 00:18:29.332 "ffdhe4096", 00:18:29.332 "ffdhe6144", 00:18:29.332 "ffdhe8192" 00:18:29.332 ] 00:18:29.332 } 00:18:29.332 }, 00:18:29.332 { 00:18:29.332 "method": "bdev_nvme_attach_controller", 00:18:29.332 "params": { 00:18:29.332 "name": "nvme0", 00:18:29.332 "trtype": "TCP", 00:18:29.332 "adrfam": "IPv4", 00:18:29.332 "traddr": "10.0.0.2", 00:18:29.332 "trsvcid": "4420", 00:18:29.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.332 "prchk_reftag": false, 00:18:29.332 "prchk_guard": false, 00:18:29.332 "ctrlr_loss_timeout_sec": 0, 00:18:29.332 "reconnect_delay_sec": 0, 00:18:29.332 "fast_io_fail_timeout_sec": 0, 00:18:29.332 "psk": "key0", 00:18:29.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.332 "hdgst": false, 00:18:29.332 "ddgst": false, 
00:18:29.332 "multipath": "multipath" 00:18:29.332 } 00:18:29.332 }, 00:18:29.332 { 00:18:29.332 "method": "bdev_nvme_set_hotplug", 00:18:29.332 "params": { 00:18:29.332 "period_us": 100000, 00:18:29.332 "enable": false 00:18:29.332 } 00:18:29.332 }, 00:18:29.332 { 00:18:29.332 "method": "bdev_enable_histogram", 00:18:29.332 "params": { 00:18:29.332 "name": "nvme0n1", 00:18:29.332 "enable": true 00:18:29.332 } 00:18:29.332 }, 00:18:29.332 { 00:18:29.332 "method": "bdev_wait_for_examine" 00:18:29.332 } 00:18:29.332 ] 00:18:29.332 }, 00:18:29.332 { 00:18:29.332 "subsystem": "nbd", 00:18:29.332 "config": [] 00:18:29.332 } 00:18:29.332 ] 00:18:29.332 }' 00:18:29.332 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.332 [2024-11-19 11:20:24.719419] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:18:29.332 [2024-11-19 11:20:24.719514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631168 ] 00:18:29.332 [2024-11-19 11:20:24.800522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.591 [2024-11-19 11:20:24.862108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.591 [2024-11-19 11:20:25.036420] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.849 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.849 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.849 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:29.849 11:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:30.108 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.108 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:30.108 Running I/O for 1 seconds... 00:18:31.302 3621.00 IOPS, 14.14 MiB/s 00:18:31.303 Latency(us) 00:18:31.303 [2024-11-19T10:20:26.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.303 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:31.303 Verification LBA range: start 0x0 length 0x2000 00:18:31.303 nvme0n1 : 1.02 3678.03 14.37 0.00 0.00 34500.19 6844.87 30486.38 00:18:31.303 [2024-11-19T10:20:26.800Z] =================================================================================================================== 00:18:31.303 [2024-11-19T10:20:26.800Z] Total : 3678.03 14.37 0.00 0.00 34500.19 6844.87 30486.38 00:18:31.303 { 00:18:31.303 "results": [ 00:18:31.303 { 00:18:31.303 "job": "nvme0n1", 00:18:31.303 "core_mask": "0x2", 00:18:31.303 "workload": "verify", 00:18:31.303 "status": "finished", 00:18:31.303 "verify_range": { 00:18:31.303 "start": 0, 00:18:31.303 "length": 8192 00:18:31.303 }, 00:18:31.303 "queue_depth": 128, 00:18:31.303 "io_size": 4096, 00:18:31.303 "runtime": 1.019296, 00:18:31.303 "iops": 3678.0287571029417, 00:18:31.303 "mibps": 14.367299832433366, 00:18:31.303 "io_failed": 0, 00:18:31.303 "io_timeout": 0, 00:18:31.303 "avg_latency_us": 34500.185180048014, 00:18:31.303 "min_latency_us": 6844.8711111111115, 00:18:31.303 "max_latency_us": 30486.376296296297 00:18:31.303 } 00:18:31.303 ], 00:18:31.303 "core_count": 1 00:18:31.303 } 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:31.303 11:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:31.303 nvmf_trace.0 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2631168 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2631168 ']' 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2631168 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2631168 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631168' 00:18:31.303 killing process with pid 2631168 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2631168 00:18:31.303 Received shutdown signal, test time was about 1.000000 seconds 00:18:31.303 00:18:31.303 Latency(us) 00:18:31.303 [2024-11-19T10:20:26.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.303 [2024-11-19T10:20:26.800Z] =================================================================================================================== 00:18:31.303 [2024-11-19T10:20:26.800Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.303 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2631168 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:31.561 rmmod nvme_tcp 00:18:31.561 rmmod nvme_fabrics 00:18:31.561 rmmod nvme_keyring 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2631023 ']' 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2631023 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2631023 ']' 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2631023 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2631023 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2631023' 00:18:31.561 killing process with pid 2631023 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2631023 00:18:31.561 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2631023 00:18:31.821 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:31.821 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:31.822 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:31.822 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:18:31.822 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:31.822 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:31.822 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:31.822 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:31.822 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:31.822 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.822 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.822 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.F9SrhzHNIc /tmp/tmp.72pnXGyUsz /tmp/tmp.lenRvSW2hN 00:18:34.359 00:18:34.359 real 1m24.198s 00:18:34.359 user 2m17.141s 00:18:34.359 sys 0m29.498s 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.359 ************************************ 00:18:34.359 END TEST nvmf_tls 00:18:34.359 ************************************ 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.359 ************************************ 00:18:34.359 START TEST nvmf_fips 00:18:34.359 ************************************ 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:34.359 * Looking for test storage... 00:18:34.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.359 
11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.359 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:34.360 11:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:34.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.360 --rc genhtml_branch_coverage=1 00:18:34.360 --rc genhtml_function_coverage=1 00:18:34.360 --rc genhtml_legend=1 00:18:34.360 --rc geninfo_all_blocks=1 00:18:34.360 --rc geninfo_unexecuted_blocks=1 00:18:34.360 00:18:34.360 ' 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:34.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.360 --rc genhtml_branch_coverage=1 00:18:34.360 --rc genhtml_function_coverage=1 00:18:34.360 --rc genhtml_legend=1 00:18:34.360 --rc geninfo_all_blocks=1 00:18:34.360 --rc geninfo_unexecuted_blocks=1 00:18:34.360 00:18:34.360 ' 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:34.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.360 --rc genhtml_branch_coverage=1 00:18:34.360 --rc genhtml_function_coverage=1 00:18:34.360 --rc genhtml_legend=1 00:18:34.360 --rc geninfo_all_blocks=1 00:18:34.360 --rc geninfo_unexecuted_blocks=1 00:18:34.360 00:18:34.360 ' 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:34.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.360 --rc genhtml_branch_coverage=1 00:18:34.360 --rc genhtml_function_coverage=1 00:18:34.360 --rc genhtml_legend=1 00:18:34.360 --rc geninfo_all_blocks=1 00:18:34.360 --rc geninfo_unexecuted_blocks=1 00:18:34.360 00:18:34.360 ' 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.360 11:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.360 11:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
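The exported PATH above shows the same `/opt/go`, `/opt/golangci`, and `/opt/protoc` directories prepended many times over, because each sourcing of `paths/export.sh` prepends unconditionally. A minimal sketch of an idempotent prepend that would avoid those duplicates (helper name `path_prepend` is illustrative, not the script's own):

```shell
# Prepend a directory to PATH only if it is not already present,
# avoiding the duplicated entries visible in the trace above.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;   # otherwise prepend
    esac
}

path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
```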
00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:34.360 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
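The xtrace above walks `scripts/common.sh`'s `cmp_versions` helper component by component to decide that OpenSSL 3.1.1 satisfies the 3.0.0 floor. A standalone sketch of the same dotted-version comparison (the function name `version_ge` is illustrative, not the script's own helper):

```shell
# Compare dotted versions component by component, mirroring the
# (( ver1[v] > ver2[v] )) / (( ver1[v] < ver2[v] )) walk in the trace.
# version_ge A B -> exit 0 when A >= B.
version_ge() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x > y )) && return 0
        (( x < y )) && return 1
    done
    return 0   # all components equal
}

version_ge 3.1.1 3.0.0 && echo "3.1.1 >= 3.0.0"   # prints "3.1.1 >= 3.0.0"
```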
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
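The `build_openssl_config` / `OPENSSL_CONF=spdk_fips.conf` steps above generate a temporary OpenSSL configuration that activates the base and FIPS providers, which is what `openssl list -providers` then reports. A minimal sketch of such a config (filename and section names illustrative; real deployments include the `fipsinstall`-generated `fipsmodule.cnf` rather than inlining `activate = 1` in the fips section):

```
# spdk_fips.conf (illustrative sketch) - activate base + FIPS providers
openssl_conf = openssl_init

[openssl_init]
providers = provider_sect

[provider_sect]
base = base_sect
fips = fips_sect

[base_sect]
activate = 1

[fips_sect]
activate = 1
```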
common/autotest_common.sh@646 -- # type -P openssl 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:34.361 Error setting digest 00:18:34.361 406209C1327F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:34.361 406209C1327F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:34.361 11:20:29 
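The `NOT openssl md5 /dev/fd/62` step above succeeds precisely because MD5 fails: under the FIPS provider, legacy digests are rejected ("unsupported" in the error output), and the `NOT` helper inverts the exit status. A sketch of that inversion pattern (helper name `not_ok` is illustrative, not `autotest_common.sh`'s exact implementation, which also tracks the exit code):

```shell
# Run a command and invert its status: an expected failure (like
# openssl md5 under FIPS) makes the check pass, and vice versa.
not_ok() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, as required
}

not_ok false && echo "false failed as expected"   # prints "false failed as expected"
```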
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:34.361 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:18:36.895 Found 0000:82:00.0 (0x8086 - 0x159b) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:18:36.895 Found 0000:82:00.1 (0x8086 - 0x159b) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:18:36.895 Found net devices under 0000:82:00.0: cvl_0_0 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:18:36.895 Found net devices under 0000:82:00.1: cvl_0_1 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.895 11:20:32 
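The discovery loop above matches each PCI device against the `e810`, `x722`, and `mlx` device-ID arrays and reports the two ice-driver ports (`0x8086 - 0x159b`). A sketch of that ID-to-family classification using the IDs from the arrays in the trace (function name `nic_family` is illustrative):

```shell
# Classify a NIC PCI device ID into the family arrays built in the
# trace above (0x1592/0x159b -> e810, 0x37d2 -> x722).
nic_family() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;
        0x37d2)        echo x722 ;;
        *)             echo unknown ;;
    esac
}

nic_family 0x159b   # prints "e810", matching the Found ... (0x8086 - 0x159b) lines
```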
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:36.895 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.896 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.896 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:36.896 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:36.896 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.896 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:36.896 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:36.896 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:37.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:18:37.154 00:18:37.154 --- 10.0.0.2 ping statistics --- 00:18:37.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.154 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:37.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:18:37.154 00:18:37.154 --- 10.0.0.1 ping statistics --- 00:18:37.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.154 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:37.154 11:20:32 
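The two pings above verify connectivity in both directions across the namespace boundary and print the usual `rtt min/avg/max/mdev` summary. A small sketch that extracts the average RTT from that summary line with pure text processing (helper name `avg_rtt` is illustrative):

```shell
# Pull the avg value out of ping's "rtt min/avg/max/mdev = ..." line,
# splitting on both '/' and space so min=$7, avg=$8, max=$9, mdev=$10.
avg_rtt() {
    awk -F'[/ ]' '/^rtt/ {print $8}'
}

echo 'rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms' | avg_rtt   # prints "0.259"
```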
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2633821 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2633821 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2633821 ']' 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.154 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:37.154 [2024-11-19 11:20:32.565825] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:18:37.154 [2024-11-19 11:20:32.565929] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.154 [2024-11-19 11:20:32.647330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.413 [2024-11-19 11:20:32.706303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.413 [2024-11-19 11:20:32.706382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.413 [2024-11-19 11:20:32.706399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.413 [2024-11-19 11:20:32.706411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.413 [2024-11-19 11:20:32.706422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:37.413 [2024-11-19 11:20:32.707027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.xXP 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.xXP 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.xXP 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.xXP 00:18:37.413 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.671 [2024-11-19 11:20:33.091103] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.671 [2024-11-19 11:20:33.107119] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.671 [2024-11-19 11:20:33.107331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.671 malloc0 00:18:37.671 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.671 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2633970 00:18:37.671 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.671 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2633970 /var/tmp/bdevperf.sock 00:18:37.671 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2633970 ']' 00:18:37.671 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.671 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.671 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.671 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.671 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:37.930 [2024-11-19 11:20:33.232451] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:18:37.930 [2024-11-19 11:20:33.232543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633970 ] 00:18:37.930 [2024-11-19 11:20:33.307942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.930 [2024-11-19 11:20:33.368833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.188 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.188 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:38.188 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.xXP 00:18:38.445 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:38.703 [2024-11-19 11:20:34.005466] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.703 TLSTESTn1 00:18:38.703 11:20:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:38.703 Running I/O for 10 seconds... 
00:18:41.010 3329.00 IOPS, 13.00 MiB/s [2024-11-19T10:20:37.441Z] 3439.50 IOPS, 13.44 MiB/s [2024-11-19T10:20:38.406Z] 3444.67 IOPS, 13.46 MiB/s [2024-11-19T10:20:39.340Z] 3510.25 IOPS, 13.71 MiB/s [2024-11-19T10:20:40.274Z] 3513.80 IOPS, 13.73 MiB/s [2024-11-19T10:20:41.646Z] 3492.50 IOPS, 13.64 MiB/s [2024-11-19T10:20:42.577Z] 3501.29 IOPS, 13.68 MiB/s [2024-11-19T10:20:43.244Z] 3501.62 IOPS, 13.68 MiB/s [2024-11-19T10:20:44.617Z] 3502.44 IOPS, 13.68 MiB/s [2024-11-19T10:20:44.617Z] 3508.10 IOPS, 13.70 MiB/s 00:18:49.120 Latency(us) 00:18:49.120 [2024-11-19T10:20:44.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.120 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:49.120 Verification LBA range: start 0x0 length 0x2000 00:18:49.120 TLSTESTn1 : 10.02 3513.75 13.73 0.00 0.00 36368.92 7815.77 53205.52 00:18:49.120 [2024-11-19T10:20:44.617Z] =================================================================================================================== 00:18:49.120 [2024-11-19T10:20:44.617Z] Total : 3513.75 13.73 0.00 0.00 36368.92 7815.77 53205.52 00:18:49.120 { 00:18:49.120 "results": [ 00:18:49.120 { 00:18:49.120 "job": "TLSTESTn1", 00:18:49.120 "core_mask": "0x4", 00:18:49.120 "workload": "verify", 00:18:49.120 "status": "finished", 00:18:49.120 "verify_range": { 00:18:49.120 "start": 0, 00:18:49.120 "length": 8192 00:18:49.120 }, 00:18:49.120 "queue_depth": 128, 00:18:49.120 "io_size": 4096, 00:18:49.120 "runtime": 10.01978, 00:18:49.120 "iops": 3513.749802889884, 00:18:49.120 "mibps": 13.72558516753861, 00:18:49.120 "io_failed": 0, 00:18:49.120 "io_timeout": 0, 00:18:49.120 "avg_latency_us": 36368.922221317516, 00:18:49.120 "min_latency_us": 7815.774814814815, 00:18:49.120 "max_latency_us": 53205.52296296296 00:18:49.120 } 00:18:49.120 ], 00:18:49.120 "core_count": 1 00:18:49.120 } 00:18:49.120 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:49.120 
11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:49.120 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:49.121 nvmf_trace.0 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2633970 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2633970 ']' 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2633970 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2633970 00:18:49.121 11:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2633970' 00:18:49.121 killing process with pid 2633970 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2633970 00:18:49.121 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.121 00:18:49.121 Latency(us) 00:18:49.121 [2024-11-19T10:20:44.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.121 [2024-11-19T10:20:44.618Z] =================================================================================================================== 00:18:49.121 [2024-11-19T10:20:44.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2633970 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:49.121 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:49.121 rmmod nvme_tcp 00:18:49.379 rmmod nvme_fabrics 00:18:49.379 rmmod nvme_keyring 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2633821 ']' 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2633821 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2633821 ']' 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2633821 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2633821 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2633821' 00:18:49.379 killing process with pid 2633821 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2633821 00:18:49.379 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2633821 00:18:49.637 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:49.637 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:49.637 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:49.637 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:18:49.637 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:18:49.637 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:49.637 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:18:49.637 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:49.637 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:49.637 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.637 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.637 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.543 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:51.543 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.xXP 00:18:51.543 00:18:51.543 real 0m17.691s 00:18:51.543 user 0m21.375s 00:18:51.543 sys 0m7.310s 00:18:51.543 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.543 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:51.543 ************************************ 00:18:51.543 END TEST nvmf_fips 00:18:51.543 ************************************ 00:18:51.543 11:20:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:51.543 11:20:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:51.543 11:20:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.543 11:20:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:51.802 ************************************ 00:18:51.802 START TEST nvmf_control_msg_list 00:18:51.802 ************************************ 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:51.802 * Looking for test storage... 00:18:51.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.802 11:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:51.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.802 --rc genhtml_branch_coverage=1 00:18:51.802 --rc genhtml_function_coverage=1 00:18:51.802 --rc genhtml_legend=1 00:18:51.802 --rc geninfo_all_blocks=1 00:18:51.802 --rc geninfo_unexecuted_blocks=1 00:18:51.802 00:18:51.802 ' 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:51.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.802 --rc genhtml_branch_coverage=1 00:18:51.802 --rc genhtml_function_coverage=1 00:18:51.802 --rc genhtml_legend=1 00:18:51.802 --rc geninfo_all_blocks=1 00:18:51.802 --rc geninfo_unexecuted_blocks=1 00:18:51.802 00:18:51.802 ' 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:51.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.802 --rc genhtml_branch_coverage=1 00:18:51.802 --rc genhtml_function_coverage=1 00:18:51.802 --rc genhtml_legend=1 00:18:51.802 --rc geninfo_all_blocks=1 00:18:51.802 --rc geninfo_unexecuted_blocks=1 00:18:51.802 00:18:51.802 ' 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:18:51.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.802 --rc genhtml_branch_coverage=1 00:18:51.802 --rc genhtml_function_coverage=1 00:18:51.802 --rc genhtml_legend=1 00:18:51.802 --rc geninfo_all_blocks=1 00:18:51.802 --rc geninfo_unexecuted_blocks=1 00:18:51.802 00:18:51.802 ' 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:18:51.802 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 
00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.803 11:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:51.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:51.803 11:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:18:51.803 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:18:55.087 11:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:18:55.087 Found 0000:82:00.0 (0x8086 - 0x159b) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:18:55.087 Found 0000:82:00.1 (0x8086 - 0x159b) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:55.087 11:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:18:55.087 Found net devices under 0000:82:00.0: cvl_0_0 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:55.087 11:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:55.087 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:18:55.088 Found net devices under 0000:82:00.1: cvl_0_1 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:55.088 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.088 11:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:55.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:18:55.088 00:18:55.088 --- 10.0.0.2 ping statistics --- 00:18:55.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.088 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:55.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:18:55.088 00:18:55.088 --- 10.0.0.1 ping statistics --- 00:18:55.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.088 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2637646 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2637646 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2637646 ']' 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:55.088 [2024-11-19 11:20:50.101050] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:18:55.088 [2024-11-19 11:20:50.101131] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.088 [2024-11-19 11:20:50.185902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.088 [2024-11-19 11:20:50.245690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.088 [2024-11-19 11:20:50.245751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.088 [2024-11-19 11:20:50.245764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.088 [2024-11-19 11:20:50.245775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.088 [2024-11-19 11:20:50.245784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:55.088 [2024-11-19 11:20:50.246464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:55.088 [2024-11-19 11:20:50.397857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:55.088 Malloc0 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:55.088 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.089 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:55.089 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.089 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:55.089 [2024-11-19 11:20:50.438125] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.089 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.089 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2637673 00:18:55.089 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:55.089 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2637674 00:18:55.089 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:55.089 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2637675 00:18:55.089 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:55.089 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2637673 00:18:55.089 [2024-11-19 11:20:50.506747] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:18:55.089 [2024-11-19 11:20:50.517041] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:55.089 [2024-11-19 11:20:50.517312] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:56.461 Initializing NVMe Controllers 00:18:56.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:56.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:18:56.461 Initialization complete. Launching workers. 00:18:56.461 ======================================================== 00:18:56.461 Latency(us) 00:18:56.461 Device Information : IOPS MiB/s Average min max 00:18:56.461 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 26.00 0.10 39361.82 354.63 41865.30 00:18:56.461 ======================================================== 00:18:56.461 Total : 26.00 0.10 39361.82 354.63 41865.30 00:18:56.461 00:18:56.461 Initializing NVMe Controllers 00:18:56.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:56.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:18:56.461 Initialization complete. Launching workers. 
00:18:56.461 ======================================================== 00:18:56.461 Latency(us) 00:18:56.461 Device Information : IOPS MiB/s Average min max 00:18:56.461 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40888.36 40679.01 40952.33 00:18:56.461 ======================================================== 00:18:56.461 Total : 25.00 0.10 40888.36 40679.01 40952.33 00:18:56.461 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2637674 00:18:56.461 Initializing NVMe Controllers 00:18:56.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:18:56.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:18:56.461 Initialization complete. Launching workers. 00:18:56.461 ======================================================== 00:18:56.461 Latency(us) 00:18:56.461 Device Information : IOPS MiB/s Average min max 00:18:56.461 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40917.11 40806.85 41308.68 00:18:56.461 ======================================================== 00:18:56.461 Total : 25.00 0.10 40917.11 40806.85 41308.68 00:18:56.461 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2637675 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:56.461 11:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:56.461 rmmod nvme_tcp 00:18:56.461 rmmod nvme_fabrics 00:18:56.461 rmmod nvme_keyring 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2637646 ']' 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2637646 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2637646 ']' 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2637646 00:18:56.461 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:18:56.462 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.462 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2637646 00:18:56.462 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.462 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.462 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2637646' 00:18:56.462 killing process with pid 2637646 00:18:56.462 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2637646 00:18:56.462 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2637646 00:18:56.720 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:56.720 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:56.720 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:56.720 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:18:56.720 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:18:56.720 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:56.720 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:18:56.720 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:56.720 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:56.720 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.720 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.720 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.626 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:58.626 00:18:58.626 real 0m6.973s 00:18:58.626 user 0m5.971s 
00:18:58.626 sys 0m2.884s 00:18:58.626 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.626 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:58.626 ************************************ 00:18:58.626 END TEST nvmf_control_msg_list 00:18:58.626 ************************************ 00:18:58.626 11:20:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:58.626 11:20:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:58.626 11:20:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.626 11:20:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:58.626 ************************************ 00:18:58.626 START TEST nvmf_wait_for_buf 00:18:58.626 ************************************ 00:18:58.626 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:58.626 * Looking for test storage... 
00:18:58.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:58.626 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:58.626 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:58.626 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
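Editor's note: the trace above is scripts/common.sh's `cmp_versions` splitting each version string on dots (`IFS=.-`, `read -ra ver1`) and comparing the numeric components left to right, which is how `lt 1.15 2` decides that lcov 1.15 predates 2.x. A minimal standalone sketch of that comparison; `ver_lt` is an illustrative name, not SPDK's function, and it handles only dot-separated numeric components:

```shell
# Compare two dotted version strings component-wise, as cmp_versions does.
# Returns 0 (true) when $1 < $2.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)          # split "1.15" -> (1 15), "2" -> (2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                        # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"    # matches the lt 1.15 2 check in the log
```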
export 'LCOV_OPTS= 00:18:58.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.886 --rc genhtml_branch_coverage=1 00:18:58.886 --rc genhtml_function_coverage=1 00:18:58.886 --rc genhtml_legend=1 00:18:58.886 --rc geninfo_all_blocks=1 00:18:58.886 --rc geninfo_unexecuted_blocks=1 00:18:58.886 00:18:58.886 ' 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:58.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.886 --rc genhtml_branch_coverage=1 00:18:58.886 --rc genhtml_function_coverage=1 00:18:58.886 --rc genhtml_legend=1 00:18:58.886 --rc geninfo_all_blocks=1 00:18:58.886 --rc geninfo_unexecuted_blocks=1 00:18:58.886 00:18:58.886 ' 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:58.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.886 --rc genhtml_branch_coverage=1 00:18:58.886 --rc genhtml_function_coverage=1 00:18:58.886 --rc genhtml_legend=1 00:18:58.886 --rc geninfo_all_blocks=1 00:18:58.886 --rc geninfo_unexecuted_blocks=1 00:18:58.886 00:18:58.886 ' 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:58.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.886 --rc genhtml_branch_coverage=1 00:18:58.886 --rc genhtml_function_coverage=1 00:18:58.886 --rc genhtml_legend=1 00:18:58.886 --rc geninfo_all_blocks=1 00:18:58.886 --rc geninfo_unexecuted_blocks=1 00:18:58.886 00:18:58.886 ' 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.886 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
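Editor's note: paths/export.sh prepends the same tool directories (`/opt/golangci`, `/opt/protoc`, `/opt/go`) on every sourcing, which is why the PATH values echoed above contain each entry many times over. A hedged sketch of deduplicating such a PATH while preserving first-seen order; `dedupe_path` is an illustrative helper, not part of SPDK:

```shell
# Remove duplicate entries from a colon-separated path list, keeping the
# first occurrence of each directory.
dedupe_path() {
    local out= dir
    local IFS=:
    for dir in $1; do               # IFS=: splits the list on colons
        case ":$out:" in
            *":$dir:"*) ;;          # already present, skip the repeat
            *) out=${out:+$out:}$dir ;;
        esac
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/sbin"
# prints: /opt/go/bin:/usr/bin:/sbin
```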
0xFFFF) 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:58.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:18:58.887 11:20:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:01.419 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.419 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:01.419 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:01.419 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:01.419 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:01.419 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:01.419 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:01.419 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:01.419 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:01.419 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:01.419 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:01.420 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:19:01.420 Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:01.420 Found net devices under 0000:82:00.0: cvl_0_0 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:01.420 11:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:01.420 Found net devices under 0000:82:00.1: cvl_0_1 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:01.420 11:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:01.420 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:01.679 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.679 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.679 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.679 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.679 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:01.679 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.679 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.679 11:20:57 
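Editor's note: nvmf_tcp_init above isolates the target NIC in its own network namespace so the SPDK target (10.0.0.2) and the kernel initiator (10.0.0.1) use separate stacks. The same plumbing collected into one sketch; names and addresses mirror the log, and `run` is a dry-run helper added here so the sketch is safe without root or real `cvl_*` interfaces (replace its body with `"$@"` to actually apply the commands):

```shell
NS=cvl_0_0_ns_spdk     # target-side namespace, as in the log
TGT_IF=cvl_0_0         # NIC moved into the namespace (SPDK target side)
INI_IF=cvl_0_1         # NIC left in the root namespace (initiator side)

run() { echo "+ $*"; }  # dry-run: print each step instead of executing it

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"            # isolate the target NIC
run ip addr add 10.0.0.1/24 dev "$INI_IF"        # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up        # loopback inside the ns
run ping -c 1 10.0.0.2                           # reachability check
```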
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:01.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:19:01.680 00:19:01.680 --- 10.0.0.2 ping statistics --- 00:19:01.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.680 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:01.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:19:01.680 00:19:01.680 --- 10.0.0.1 ping statistics --- 00:19:01.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.680 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2640158 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 2640158 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2640158 ']' 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.680 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:01.680 [2024-11-19 11:20:57.122981] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:19:01.680 [2024-11-19 11:20:57.123067] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.938 [2024-11-19 11:20:57.204599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.938 [2024-11-19 11:20:57.256940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.938 [2024-11-19 11:20:57.256998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:01.938 [2024-11-19 11:20:57.257026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.938 [2024-11-19 11:20:57.257037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.938 [2024-11-19 11:20:57.257046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:01.938 [2024-11-19 11:20:57.257690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:01.938 
11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.938 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:01.939 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.939 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:01.939 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.939 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:02.196 Malloc0 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:02.196 [2024-11-19 11:20:57.488931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:02.196 [2024-11-19 11:20:57.513128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:02.196 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:02.196 [2024-11-19 11:20:57.601506] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:03.569 Initializing NVMe Controllers 00:19:03.569 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:03.569 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:03.569 Initialization complete. Launching workers. 00:19:03.569 ======================================================== 00:19:03.569 Latency(us) 00:19:03.569 Device Information : IOPS MiB/s Average min max 00:19:03.569 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 48.83 6.10 85428.02 31921.16 191528.16 00:19:03.569 ======================================================== 00:19:03.569 Total : 48.83 6.10 85428.02 31921.16 191528.16 00:19:03.569 00:19:03.569 11:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:03.569 11:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.569 11:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:03.569 11:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:03.569 11:20:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.569 11:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=758 00:19:03.569 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 758 -eq 0 ]] 00:19:03.569 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:03.569 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:03.569 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:03.569 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:03.569 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:03.569 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:03.569 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:03.569 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:03.569 rmmod nvme_tcp 00:19:03.569 rmmod nvme_fabrics 00:19:03.569 rmmod nvme_keyring 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2640158 ']' 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2640158 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2640158 ']' 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2640158 
00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2640158 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2640158' 00:19:03.827 killing process with pid 2640158 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2640158 00:19:03.827 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2640158 00:19:04.086 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:04.086 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:04.086 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:04.086 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:04.086 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:04.086 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:04.086 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:04.086 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:04.086 11:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:04.086 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.086 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:04.086 11:20:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.992 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:05.992 00:19:05.992 real 0m7.315s 00:19:05.992 user 0m3.334s 00:19:05.992 sys 0m2.460s 00:19:05.992 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.992 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:05.992 ************************************ 00:19:05.992 END TEST nvmf_wait_for_buf 00:19:05.992 ************************************ 00:19:05.992 11:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:05.992 11:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:05.992 11:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:05.992 11:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:05.992 11:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:05.992 11:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:08.559 
11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.559 11:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:08.559 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.559 11:21:04 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:19:08.559 Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:08.559 Found net devices under 0000:82:00.0: cvl_0_0 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:08.559 Found net devices under 0000:82:00.1: cvl_0_1 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.559 11:21:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:08.846 ************************************ 00:19:08.846 START TEST nvmf_perf_adq 00:19:08.846 ************************************ 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:08.846 * Looking for test storage... 00:19:08.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:08.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.846 --rc genhtml_branch_coverage=1 00:19:08.846 --rc genhtml_function_coverage=1 00:19:08.846 --rc genhtml_legend=1 00:19:08.846 --rc geninfo_all_blocks=1 00:19:08.846 --rc geninfo_unexecuted_blocks=1 00:19:08.846 00:19:08.846 ' 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:08.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.846 --rc genhtml_branch_coverage=1 00:19:08.846 --rc genhtml_function_coverage=1 00:19:08.846 --rc genhtml_legend=1 00:19:08.846 --rc geninfo_all_blocks=1 00:19:08.846 --rc geninfo_unexecuted_blocks=1 00:19:08.846 00:19:08.846 ' 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:08.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.846 --rc genhtml_branch_coverage=1 00:19:08.846 --rc genhtml_function_coverage=1 00:19:08.846 --rc genhtml_legend=1 00:19:08.846 --rc geninfo_all_blocks=1 00:19:08.846 --rc geninfo_unexecuted_blocks=1 00:19:08.846 00:19:08.846 ' 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:08.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.846 --rc genhtml_branch_coverage=1 00:19:08.846 --rc genhtml_function_coverage=1 00:19:08.846 --rc genhtml_legend=1 00:19:08.846 --rc geninfo_all_blocks=1 00:19:08.846 --rc geninfo_unexecuted_blocks=1 00:19:08.846 00:19:08.846 ' 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.846 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.847 11:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:08.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:08.847 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:11.378 11:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:11.378 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:19:11.378 
Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:11.378 Found net devices under 0000:82:00.0: cvl_0_0 00:19:11.378 11:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:11.378 Found net devices under 0000:82:00.1: cvl_0_1 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:11.378 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:12.313 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:14.220 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:19.497 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:19.497 11:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:19:19.497 Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.497 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:19.498 Found net devices under 0000:82:00.0: cvl_0_0 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:19.498 Found net devices under 0000:82:00.1: cvl_0_1 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:19.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:19.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:19:19.498 00:19:19.498 --- 10.0.0.2 ping statistics --- 00:19:19.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.498 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:19:19.498 00:19:19.498 --- 10.0.0.1 ping statistics --- 00:19:19.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.498 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2646202 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2646202 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2646202 ']' 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.498 [2024-11-19 11:21:14.719979] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
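The namespace plumbing that nvmf_tcp_init traced above (moving cvl_0_0 into cvl_0_0_ns_spdk as the target side, leaving cvl_0_1 in the root namespace as the initiator) can be collected into one sketch. This is illustrative only: `setup_tcp_pair` is a hypothetical helper, the real steps need root and the two E810 ports, and `DRY_RUN=1` prints the commands instead of executing them.

```shell
# Sketch of the back-to-back topology from nvmf/common.sh nvmf_tcp_init:
# port 0 becomes the target inside a private netns, port 1 stays outside
# as the initiator. Interface and namespace names are taken from the log.
setup_tcp_pair() {
    local run=${DRY_RUN:+echo}   # with DRY_RUN set, echo instead of execute
    $run ip netns add cvl_0_0_ns_spdk
    $run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    $run ip addr add 10.0.0.1/24 dev cvl_0_1
    $run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    $run ip link set cvl_0_1 up
    $run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    $run ip netns exec cvl_0_0_ns_spdk ip link set lo up
}
DRY_RUN=1 setup_tcp_pair
```

The ping in both directions afterwards confirms the 10.0.0.1 ↔ 10.0.0.2 path before any NVMe traffic is started.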
00:19:19.498 [2024-11-19 11:21:14.720057] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.498 [2024-11-19 11:21:14.797162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:19.498 [2024-11-19 11:21:14.852254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.498 [2024-11-19 11:21:14.852310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.498 [2024-11-19 11:21:14.852338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.498 [2024-11-19 11:21:14.852367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.498 [2024-11-19 11:21:14.852379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
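Once the reactors are up, adq_configure_nvmf_target issues a fixed sequence of rpc_cmd calls (traced piecemeal in the following lines). Collected in one place as a sketch: the `rpc` stub below merely echoes what would be invoked, since a live target and the real scripts/rpc.py (path assumed, not shown in this log) are needed to execute it.

```shell
# Stub so the RPC sequence can be previewed without a running nvmf_tgt;
# against a real target, replace the stub with scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
rpc framework_start_init
rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
rpc bdev_malloc_create 64 512 -b Malloc1
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The order matters: socket options must be set before framework_start_init, and the transport must exist before the subsystem gets its TCP listener on 10.0.0.2:4420.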
00:19:19.498 [2024-11-19 11:21:14.854030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.498 [2024-11-19 11:21:14.854138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.498 [2024-11-19 11:21:14.854236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:19.498 [2024-11-19 11:21:14.854244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.498 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:19.757 11:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.757 [2024-11-19 11:21:15.120890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.757 Malloc1 00:19:19.757 11:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:19.757 [2024-11-19 11:21:15.181484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.757 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2646355 00:19:19.758 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:19.758 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:22.289 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:22.289 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.289 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.289 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.289 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:22.289 "tick_rate": 2700000000, 00:19:22.289 "poll_groups": [ 00:19:22.289 { 00:19:22.289 "name": "nvmf_tgt_poll_group_000", 00:19:22.289 "admin_qpairs": 1, 00:19:22.289 "io_qpairs": 1, 00:19:22.289 "current_admin_qpairs": 1, 00:19:22.289 "current_io_qpairs": 1, 00:19:22.289 "pending_bdev_io": 0, 00:19:22.289 "completed_nvme_io": 19130, 00:19:22.289 "transports": [ 00:19:22.289 { 00:19:22.289 "trtype": "TCP" 00:19:22.289 } 00:19:22.289 ] 00:19:22.289 }, 00:19:22.289 { 00:19:22.289 "name": "nvmf_tgt_poll_group_001", 00:19:22.289 "admin_qpairs": 0, 00:19:22.289 "io_qpairs": 1, 00:19:22.289 "current_admin_qpairs": 0, 00:19:22.289 "current_io_qpairs": 1, 00:19:22.289 "pending_bdev_io": 0, 00:19:22.289 "completed_nvme_io": 19504, 00:19:22.289 "transports": [ 00:19:22.289 { 00:19:22.289 "trtype": "TCP" 00:19:22.289 } 00:19:22.289 ] 00:19:22.289 }, 00:19:22.289 { 00:19:22.289 "name": "nvmf_tgt_poll_group_002", 00:19:22.289 "admin_qpairs": 0, 00:19:22.289 "io_qpairs": 1, 00:19:22.289 "current_admin_qpairs": 0, 00:19:22.289 "current_io_qpairs": 1, 00:19:22.289 "pending_bdev_io": 0, 00:19:22.289 "completed_nvme_io": 
19670, 00:19:22.289 "transports": [ 00:19:22.289 { 00:19:22.289 "trtype": "TCP" 00:19:22.289 } 00:19:22.289 ] 00:19:22.289 }, 00:19:22.289 { 00:19:22.289 "name": "nvmf_tgt_poll_group_003", 00:19:22.289 "admin_qpairs": 0, 00:19:22.289 "io_qpairs": 1, 00:19:22.289 "current_admin_qpairs": 0, 00:19:22.289 "current_io_qpairs": 1, 00:19:22.289 "pending_bdev_io": 0, 00:19:22.289 "completed_nvme_io": 19173, 00:19:22.289 "transports": [ 00:19:22.289 { 00:19:22.289 "trtype": "TCP" 00:19:22.289 } 00:19:22.289 ] 00:19:22.289 } 00:19:22.289 ] 00:19:22.289 }' 00:19:22.289 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:22.289 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:22.289 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:22.289 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:22.289 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2646355 00:19:30.403 Initializing NVMe Controllers 00:19:30.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:30.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:30.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:30.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:30.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:30.403 Initialization complete. Launching workers. 
00:19:30.403 ======================================================== 00:19:30.403 Latency(us) 00:19:30.403 Device Information : IOPS MiB/s Average min max 00:19:30.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10101.20 39.46 6335.10 2186.81 11075.16 00:19:30.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10210.70 39.89 6267.91 2324.53 10262.24 00:19:30.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10370.20 40.51 6173.41 2212.09 10270.67 00:19:30.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10105.20 39.47 6335.38 2354.02 10441.14 00:19:30.403 ======================================================== 00:19:30.403 Total : 40787.30 159.33 6277.24 2186.81 11075.16 00:19:30.403 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:30.403 rmmod nvme_tcp 00:19:30.403 rmmod nvme_fabrics 00:19:30.403 rmmod nvme_keyring 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:30.403 11:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2646202 ']' 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2646202 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2646202 ']' 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2646202 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2646202 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2646202' 00:19:30.403 killing process with pid 2646202 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2646202 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2646202 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:30.403 
11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:30.403 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:30.404 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:30.404 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:30.404 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.404 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.404 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.308 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:32.308 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:19:32.308 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:32.308 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:32.876 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:35.417 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:40.693 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:19:40.693 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.694 11:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:40.694 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:19:40.694 
Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:40.694 Found net devices under 0000:82:00.0: cvl_0_0 00:19:40.694 11:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.694 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:40.695 Found net devices under 0000:82:00.1: cvl_0_1 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:40.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:19:40.695 00:19:40.695 --- 10.0.0.2 ping statistics --- 00:19:40.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.695 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:19:40.695 00:19:40.695 --- 10.0.0.1 ping statistics --- 00:19:40.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.695 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:40.695 net.core.busy_poll = 1 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:40.695 net.core.busy_read = 1 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.695 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2648909 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
2648909 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2648909 ']' 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.696 [2024-11-19 11:21:35.659610] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:19:40.696 [2024-11-19 11:21:35.659706] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.696 [2024-11-19 11:21:35.744853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.696 [2024-11-19 11:21:35.802769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.696 [2024-11-19 11:21:35.802825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.696 [2024-11-19 11:21:35.802846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.696 [2024-11-19 11:21:35.802864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:40.696 [2024-11-19 11:21:35.802878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.696 [2024-11-19 11:21:35.804547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.696 [2024-11-19 11:21:35.804607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.696 [2024-11-19 11:21:35.804749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.696 [2024-11-19 11:21:35.804755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.696 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.696 [2024-11-19 11:21:36.059175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.696 11:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.696 Malloc1 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.696 [2024-11-19 11:21:36.124162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2649007 
00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:40.696 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:19:43.295 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:19:43.295 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.295 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.295 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.295 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:19:43.295 "tick_rate": 2700000000, 00:19:43.295 "poll_groups": [ 00:19:43.295 { 00:19:43.295 "name": "nvmf_tgt_poll_group_000", 00:19:43.295 "admin_qpairs": 1, 00:19:43.295 "io_qpairs": 3, 00:19:43.295 "current_admin_qpairs": 1, 00:19:43.295 "current_io_qpairs": 3, 00:19:43.295 "pending_bdev_io": 0, 00:19:43.295 "completed_nvme_io": 26738, 00:19:43.295 "transports": [ 00:19:43.295 { 00:19:43.295 "trtype": "TCP" 00:19:43.295 } 00:19:43.295 ] 00:19:43.295 }, 00:19:43.295 { 00:19:43.295 "name": "nvmf_tgt_poll_group_001", 00:19:43.295 "admin_qpairs": 0, 00:19:43.295 "io_qpairs": 1, 00:19:43.295 "current_admin_qpairs": 0, 00:19:43.295 "current_io_qpairs": 1, 00:19:43.295 "pending_bdev_io": 0, 00:19:43.295 "completed_nvme_io": 24500, 00:19:43.295 "transports": [ 00:19:43.295 { 00:19:43.295 "trtype": "TCP" 00:19:43.295 } 00:19:43.295 ] 00:19:43.295 }, 00:19:43.295 { 00:19:43.295 "name": "nvmf_tgt_poll_group_002", 00:19:43.295 "admin_qpairs": 0, 00:19:43.295 "io_qpairs": 0, 00:19:43.295 "current_admin_qpairs": 0, 
00:19:43.295 "current_io_qpairs": 0, 00:19:43.295 "pending_bdev_io": 0, 00:19:43.295 "completed_nvme_io": 0, 00:19:43.295 "transports": [ 00:19:43.295 { 00:19:43.295 "trtype": "TCP" 00:19:43.295 } 00:19:43.295 ] 00:19:43.295 }, 00:19:43.295 { 00:19:43.295 "name": "nvmf_tgt_poll_group_003", 00:19:43.295 "admin_qpairs": 0, 00:19:43.295 "io_qpairs": 0, 00:19:43.295 "current_admin_qpairs": 0, 00:19:43.295 "current_io_qpairs": 0, 00:19:43.295 "pending_bdev_io": 0, 00:19:43.295 "completed_nvme_io": 0, 00:19:43.295 "transports": [ 00:19:43.295 { 00:19:43.295 "trtype": "TCP" 00:19:43.295 } 00:19:43.295 ] 00:19:43.295 } 00:19:43.295 ] 00:19:43.295 }' 00:19:43.295 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:43.295 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:19:43.295 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:19:43.295 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:19:43.295 11:21:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2649007 00:19:51.407 Initializing NVMe Controllers 00:19:51.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:51.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:51.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:51.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:51.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:51.407 Initialization complete. Launching workers. 
00:19:51.407 ======================================================== 00:19:51.407 Latency(us) 00:19:51.407 Device Information : IOPS MiB/s Average min max 00:19:51.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4510.70 17.62 14244.80 1895.75 59802.71 00:19:51.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12859.20 50.23 4976.70 1839.31 46120.48 00:19:51.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4875.20 19.04 13179.20 2069.53 62114.69 00:19:51.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4389.10 17.14 14585.49 2184.10 59712.04 00:19:51.407 ======================================================== 00:19:51.407 Total : 26634.20 104.04 9631.18 1839.31 62114.69 00:19:51.407 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:51.407 rmmod nvme_tcp 00:19:51.407 rmmod nvme_fabrics 00:19:51.407 rmmod nvme_keyring 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:51.407 11:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2648909 ']' 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2648909 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2648909 ']' 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2648909 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2648909 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2648909' 00:19:51.407 killing process with pid 2648909 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2648909 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2648909 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:51.407 
11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.407 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:19:54.696 00:19:54.696 real 0m45.602s 00:19:54.696 user 2m40.771s 00:19:54.696 sys 0m9.853s 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:54.696 ************************************ 00:19:54.696 END TEST nvmf_perf_adq 00:19:54.696 ************************************ 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:54.696 ************************************ 00:19:54.696 START TEST nvmf_shutdown 00:19:54.696 ************************************ 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:54.696 * Looking for test storage... 00:19:54.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:19:54.696 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:19:54.697 11:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:54.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.697 --rc genhtml_branch_coverage=1 00:19:54.697 --rc genhtml_function_coverage=1 00:19:54.697 --rc genhtml_legend=1 00:19:54.697 --rc geninfo_all_blocks=1 00:19:54.697 --rc geninfo_unexecuted_blocks=1 00:19:54.697 00:19:54.697 ' 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:54.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.697 --rc genhtml_branch_coverage=1 00:19:54.697 --rc genhtml_function_coverage=1 00:19:54.697 --rc genhtml_legend=1 00:19:54.697 --rc geninfo_all_blocks=1 00:19:54.697 --rc geninfo_unexecuted_blocks=1 00:19:54.697 00:19:54.697 ' 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:54.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.697 --rc genhtml_branch_coverage=1 00:19:54.697 --rc genhtml_function_coverage=1 00:19:54.697 --rc genhtml_legend=1 00:19:54.697 --rc geninfo_all_blocks=1 00:19:54.697 --rc geninfo_unexecuted_blocks=1 00:19:54.697 00:19:54.697 ' 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:54.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.697 --rc genhtml_branch_coverage=1 00:19:54.697 --rc genhtml_function_coverage=1 00:19:54.697 --rc genhtml_legend=1 00:19:54.697 --rc geninfo_all_blocks=1 00:19:54.697 --rc geninfo_unexecuted_blocks=1 00:19:54.697 00:19:54.697 ' 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.697 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:54.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:54.698 ************************************ 00:19:54.698 START TEST nvmf_shutdown_tc1 00:19:54.698 ************************************ 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:54.698 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:19:57.234 11:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.234 11:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:57.234 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.234 11:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:19:57.234 Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:57.234 Found net devices under 0000:82:00.0: cvl_0_0 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:57.234 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:57.235 Found net devices under 0000:82:00.1: cvl_0_1 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:57.235 11:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:57.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:19:57.235 00:19:57.235 --- 10.0.0.2 ping statistics --- 00:19:57.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.235 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:57.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:57.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:19:57.235 00:19:57.235 --- 10.0.0.1 ping statistics --- 00:19:57.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.235 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2652602 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2652602 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2652602 ']' 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:57.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.235 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.235 [2024-11-19 11:21:52.686037] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:19:57.235 [2024-11-19 11:21:52.686123] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.494 [2024-11-19 11:21:52.769966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.494 [2024-11-19 11:21:52.827204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.494 [2024-11-19 11:21:52.827275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.494 [2024-11-19 11:21:52.827304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.494 [2024-11-19 11:21:52.827324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.494 [2024-11-19 11:21:52.827334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
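The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) boils down to moving the target NIC into a network namespace, addressing both ends, opening TCP port 4420, and ping-verifying. A condensed dry-run sketch of those steps (interface names cvl_0_0/cvl_0_1 are taken from this log; the run() echo wrapper is illustrative so the sketch is safe to read without root — drop it to actually apply the commands):

```shell
#!/usr/bin/env bash
# Dry-run recap of the netns setup this log performs. run() echoes each
# command instead of executing it; replace its body with "$@" to apply for real.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0            # NIC handed to the SPDK target (10.0.0.2)
INIT_IF=cvl_0_1              # NIC left in the root namespace for the initiator (10.0.0.1)
TARGET_NS=cvl_0_0_ns_spdk    # namespace the target runs inside

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INIT_IF"
run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"
run ip addr add 10.0.0.1/24 dev "$INIT_IF"
run ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
# Open the NVMe/TCP listener port on the initiator-side interface.
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity check in both directions, as in the ping output above.
run ping -c 1 10.0.0.2
run ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
```

With the namespace in place, the target app is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace that follows.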
00:19:57.494 [2024-11-19 11:21:52.829056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.494 [2024-11-19 11:21:52.829161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.494 [2024-11-19 11:21:52.829233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:57.494 [2024-11-19 11:21:52.829236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.494 [2024-11-19 11:21:52.968357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.494 11:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:57.494 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:57.752 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:57.752 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:19:57.752 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:57.752 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:57.752 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:57.752 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:57.752 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:57.752 11:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:57.752 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:57.752 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:57.752 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:57.752 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:19:57.752 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:19:57.752 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.752 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:57.752 Malloc1 00:19:57.753 [2024-11-19 11:21:53.066449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.753 Malloc2 00:19:57.753 Malloc3 00:19:57.753 Malloc4 00:19:57.753 Malloc5 00:19:58.011 Malloc6 00:19:58.011 Malloc7 00:19:58.011 Malloc8 00:19:58.011 Malloc9 
00:19:58.011 Malloc10 00:19:58.011 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.011 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:58.011 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.011 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:58.269 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2652783 00:19:58.269 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2652783 /var/tmp/bdevperf.sock 00:19:58.269 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2652783 ']' 00:19:58.269 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:58.269 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:58.269 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.269 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.269 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:19:58.269 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:19:58.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.269 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:58.270 { 00:19:58.270 "params": { 00:19:58.270 "name": "Nvme$subsystem", 00:19:58.270 "trtype": "$TEST_TRANSPORT", 00:19:58.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.270 "adrfam": "ipv4", 00:19:58.270 "trsvcid": "$NVMF_PORT", 00:19:58.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.270 "hdgst": ${hdgst:-false}, 00:19:58.270 "ddgst": ${ddgst:-false} 00:19:58.270 }, 00:19:58.270 "method": "bdev_nvme_attach_controller" 00:19:58.270 } 00:19:58.270 EOF 00:19:58.270 )") 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:58.270 { 00:19:58.270 "params": { 00:19:58.270 "name": "Nvme$subsystem", 00:19:58.270 "trtype": "$TEST_TRANSPORT", 00:19:58.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.270 "adrfam": "ipv4", 00:19:58.270 "trsvcid": "$NVMF_PORT", 00:19:58.270 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.270 "hdgst": ${hdgst:-false}, 00:19:58.270 "ddgst": ${ddgst:-false} 00:19:58.270 }, 00:19:58.270 "method": "bdev_nvme_attach_controller" 00:19:58.270 } 00:19:58.270 EOF 00:19:58.270 )") 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:58.270 { 00:19:58.270 "params": { 00:19:58.270 "name": "Nvme$subsystem", 00:19:58.270 "trtype": "$TEST_TRANSPORT", 00:19:58.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.270 "adrfam": "ipv4", 00:19:58.270 "trsvcid": "$NVMF_PORT", 00:19:58.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.270 "hdgst": ${hdgst:-false}, 00:19:58.270 "ddgst": ${ddgst:-false} 00:19:58.270 }, 00:19:58.270 "method": "bdev_nvme_attach_controller" 00:19:58.270 } 00:19:58.270 EOF 00:19:58.270 )") 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:58.270 { 00:19:58.270 "params": { 00:19:58.270 "name": "Nvme$subsystem", 00:19:58.270 "trtype": "$TEST_TRANSPORT", 00:19:58.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.270 "adrfam": "ipv4", 00:19:58.270 "trsvcid": "$NVMF_PORT", 00:19:58.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.270 "hdgst": 
${hdgst:-false}, 00:19:58.270 "ddgst": ${ddgst:-false} 00:19:58.270 }, 00:19:58.270 "method": "bdev_nvme_attach_controller" 00:19:58.270 } 00:19:58.270 EOF 00:19:58.270 )") 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:58.270 { 00:19:58.270 "params": { 00:19:58.270 "name": "Nvme$subsystem", 00:19:58.270 "trtype": "$TEST_TRANSPORT", 00:19:58.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.270 "adrfam": "ipv4", 00:19:58.270 "trsvcid": "$NVMF_PORT", 00:19:58.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.270 "hdgst": ${hdgst:-false}, 00:19:58.270 "ddgst": ${ddgst:-false} 00:19:58.270 }, 00:19:58.270 "method": "bdev_nvme_attach_controller" 00:19:58.270 } 00:19:58.270 EOF 00:19:58.270 )") 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:58.270 { 00:19:58.270 "params": { 00:19:58.270 "name": "Nvme$subsystem", 00:19:58.270 "trtype": "$TEST_TRANSPORT", 00:19:58.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.270 "adrfam": "ipv4", 00:19:58.270 "trsvcid": "$NVMF_PORT", 00:19:58.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.270 "hdgst": ${hdgst:-false}, 00:19:58.270 "ddgst": ${ddgst:-false} 00:19:58.270 }, 00:19:58.270 "method": "bdev_nvme_attach_controller" 
00:19:58.270 } 00:19:58.270 EOF 00:19:58.270 )") 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:58.270 { 00:19:58.270 "params": { 00:19:58.270 "name": "Nvme$subsystem", 00:19:58.270 "trtype": "$TEST_TRANSPORT", 00:19:58.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.270 "adrfam": "ipv4", 00:19:58.270 "trsvcid": "$NVMF_PORT", 00:19:58.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.270 "hdgst": ${hdgst:-false}, 00:19:58.270 "ddgst": ${ddgst:-false} 00:19:58.270 }, 00:19:58.270 "method": "bdev_nvme_attach_controller" 00:19:58.270 } 00:19:58.270 EOF 00:19:58.270 )") 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:58.270 { 00:19:58.270 "params": { 00:19:58.270 "name": "Nvme$subsystem", 00:19:58.270 "trtype": "$TEST_TRANSPORT", 00:19:58.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.270 "adrfam": "ipv4", 00:19:58.270 "trsvcid": "$NVMF_PORT", 00:19:58.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.270 "hdgst": ${hdgst:-false}, 00:19:58.270 "ddgst": ${ddgst:-false} 00:19:58.270 }, 00:19:58.270 "method": "bdev_nvme_attach_controller" 00:19:58.270 } 00:19:58.270 EOF 00:19:58.270 )") 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:58.270 { 00:19:58.270 "params": { 00:19:58.270 "name": "Nvme$subsystem", 00:19:58.270 "trtype": "$TEST_TRANSPORT", 00:19:58.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.270 "adrfam": "ipv4", 00:19:58.270 "trsvcid": "$NVMF_PORT", 00:19:58.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.270 "hdgst": ${hdgst:-false}, 00:19:58.270 "ddgst": ${ddgst:-false} 00:19:58.270 }, 00:19:58.270 "method": "bdev_nvme_attach_controller" 00:19:58.270 } 00:19:58.270 EOF 00:19:58.270 )") 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:58.270 { 00:19:58.270 "params": { 00:19:58.270 "name": "Nvme$subsystem", 00:19:58.270 "trtype": "$TEST_TRANSPORT", 00:19:58.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.270 "adrfam": "ipv4", 00:19:58.270 "trsvcid": "$NVMF_PORT", 00:19:58.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.270 "hdgst": ${hdgst:-false}, 00:19:58.270 "ddgst": ${ddgst:-false} 00:19:58.270 }, 00:19:58.270 "method": "bdev_nvme_attach_controller" 00:19:58.270 } 00:19:58.270 EOF 00:19:58.270 )") 00:19:58.270 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:19:58.271 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:19:58.271 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:19:58.271 11:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:58.271 "params": { 00:19:58.271 "name": "Nvme1", 00:19:58.271 "trtype": "tcp", 00:19:58.271 "traddr": "10.0.0.2", 00:19:58.271 "adrfam": "ipv4", 00:19:58.271 "trsvcid": "4420", 00:19:58.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.271 "hdgst": false, 00:19:58.271 "ddgst": false 00:19:58.271 }, 00:19:58.271 "method": "bdev_nvme_attach_controller" 00:19:58.271 },{ 00:19:58.271 "params": { 00:19:58.271 "name": "Nvme2", 00:19:58.271 "trtype": "tcp", 00:19:58.271 "traddr": "10.0.0.2", 00:19:58.271 "adrfam": "ipv4", 00:19:58.271 "trsvcid": "4420", 00:19:58.271 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:58.271 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:58.271 "hdgst": false, 00:19:58.271 "ddgst": false 00:19:58.271 }, 00:19:58.271 "method": "bdev_nvme_attach_controller" 00:19:58.271 },{ 00:19:58.271 "params": { 00:19:58.271 "name": "Nvme3", 00:19:58.271 "trtype": "tcp", 00:19:58.271 "traddr": "10.0.0.2", 00:19:58.271 "adrfam": "ipv4", 00:19:58.271 "trsvcid": "4420", 00:19:58.271 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:58.271 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:58.271 "hdgst": false, 00:19:58.271 "ddgst": false 00:19:58.271 }, 00:19:58.271 "method": "bdev_nvme_attach_controller" 00:19:58.271 },{ 00:19:58.271 "params": { 00:19:58.271 "name": "Nvme4", 00:19:58.271 "trtype": "tcp", 00:19:58.271 "traddr": "10.0.0.2", 00:19:58.271 "adrfam": "ipv4", 00:19:58.271 "trsvcid": "4420", 00:19:58.271 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:58.271 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:58.271 "hdgst": false, 00:19:58.271 "ddgst": false 00:19:58.271 }, 00:19:58.271 "method": "bdev_nvme_attach_controller" 00:19:58.271 },{ 
00:19:58.271 "params": { 00:19:58.271 "name": "Nvme5", 00:19:58.271 "trtype": "tcp", 00:19:58.271 "traddr": "10.0.0.2", 00:19:58.271 "adrfam": "ipv4", 00:19:58.271 "trsvcid": "4420", 00:19:58.271 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:58.271 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:58.271 "hdgst": false, 00:19:58.271 "ddgst": false 00:19:58.271 }, 00:19:58.271 "method": "bdev_nvme_attach_controller" 00:19:58.271 },{ 00:19:58.271 "params": { 00:19:58.271 "name": "Nvme6", 00:19:58.271 "trtype": "tcp", 00:19:58.271 "traddr": "10.0.0.2", 00:19:58.271 "adrfam": "ipv4", 00:19:58.271 "trsvcid": "4420", 00:19:58.271 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:58.271 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:58.271 "hdgst": false, 00:19:58.271 "ddgst": false 00:19:58.271 }, 00:19:58.271 "method": "bdev_nvme_attach_controller" 00:19:58.271 },{ 00:19:58.271 "params": { 00:19:58.271 "name": "Nvme7", 00:19:58.271 "trtype": "tcp", 00:19:58.271 "traddr": "10.0.0.2", 00:19:58.271 "adrfam": "ipv4", 00:19:58.271 "trsvcid": "4420", 00:19:58.271 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:58.271 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:58.271 "hdgst": false, 00:19:58.271 "ddgst": false 00:19:58.271 }, 00:19:58.271 "method": "bdev_nvme_attach_controller" 00:19:58.271 },{ 00:19:58.271 "params": { 00:19:58.271 "name": "Nvme8", 00:19:58.271 "trtype": "tcp", 00:19:58.271 "traddr": "10.0.0.2", 00:19:58.271 "adrfam": "ipv4", 00:19:58.271 "trsvcid": "4420", 00:19:58.271 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:58.271 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:58.271 "hdgst": false, 00:19:58.271 "ddgst": false 00:19:58.271 }, 00:19:58.271 "method": "bdev_nvme_attach_controller" 00:19:58.271 },{ 00:19:58.271 "params": { 00:19:58.271 "name": "Nvme9", 00:19:58.271 "trtype": "tcp", 00:19:58.271 "traddr": "10.0.0.2", 00:19:58.271 "adrfam": "ipv4", 00:19:58.271 "trsvcid": "4420", 00:19:58.271 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:58.271 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:19:58.271 "hdgst": false, 00:19:58.271 "ddgst": false 00:19:58.271 }, 00:19:58.271 "method": "bdev_nvme_attach_controller" 00:19:58.271 },{ 00:19:58.271 "params": { 00:19:58.271 "name": "Nvme10", 00:19:58.271 "trtype": "tcp", 00:19:58.271 "traddr": "10.0.0.2", 00:19:58.271 "adrfam": "ipv4", 00:19:58.271 "trsvcid": "4420", 00:19:58.271 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:58.271 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:58.271 "hdgst": false, 00:19:58.271 "ddgst": false 00:19:58.271 }, 00:19:58.271 "method": "bdev_nvme_attach_controller" 00:19:58.271 }' 00:19:58.271 [2024-11-19 11:21:53.558442] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:19:58.271 [2024-11-19 11:21:53.558518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:58.271 [2024-11-19 11:21:53.640620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.271 [2024-11-19 11:21:53.699927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.167 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.167 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:00.167 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:00.167 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.167 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:00.167 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.167 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2652783 00:20:00.167 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:00.167 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:01.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2652783 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2652602 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.100 { 00:20:01.100 "params": { 00:20:01.100 "name": "Nvme$subsystem", 00:20:01.100 "trtype": "$TEST_TRANSPORT", 00:20:01.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.100 "adrfam": "ipv4", 00:20:01.100 "trsvcid": "$NVMF_PORT", 00:20:01.100 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.100 "hdgst": ${hdgst:-false}, 00:20:01.100 "ddgst": ${ddgst:-false} 00:20:01.100 }, 00:20:01.100 "method": "bdev_nvme_attach_controller" 00:20:01.100 } 00:20:01.100 EOF 00:20:01.100 )") 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.100 { 00:20:01.100 "params": { 00:20:01.100 "name": "Nvme$subsystem", 00:20:01.100 "trtype": "$TEST_TRANSPORT", 00:20:01.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.100 "adrfam": "ipv4", 00:20:01.100 "trsvcid": "$NVMF_PORT", 00:20:01.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.100 "hdgst": ${hdgst:-false}, 00:20:01.100 "ddgst": ${ddgst:-false} 00:20:01.100 }, 00:20:01.100 "method": "bdev_nvme_attach_controller" 00:20:01.100 } 00:20:01.100 EOF 00:20:01.100 )") 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.100 { 00:20:01.100 "params": { 00:20:01.100 "name": "Nvme$subsystem", 00:20:01.100 "trtype": "$TEST_TRANSPORT", 00:20:01.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.100 "adrfam": "ipv4", 00:20:01.100 "trsvcid": "$NVMF_PORT", 00:20:01.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.100 "hdgst": 
${hdgst:-false}, 00:20:01.100 "ddgst": ${ddgst:-false} 00:20:01.100 }, 00:20:01.100 "method": "bdev_nvme_attach_controller" 00:20:01.100 } 00:20:01.100 EOF 00:20:01.100 )") 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.100 { 00:20:01.100 "params": { 00:20:01.100 "name": "Nvme$subsystem", 00:20:01.100 "trtype": "$TEST_TRANSPORT", 00:20:01.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.100 "adrfam": "ipv4", 00:20:01.100 "trsvcid": "$NVMF_PORT", 00:20:01.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.100 "hdgst": ${hdgst:-false}, 00:20:01.100 "ddgst": ${ddgst:-false} 00:20:01.100 }, 00:20:01.100 "method": "bdev_nvme_attach_controller" 00:20:01.100 } 00:20:01.100 EOF 00:20:01.100 )") 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.100 { 00:20:01.100 "params": { 00:20:01.100 "name": "Nvme$subsystem", 00:20:01.100 "trtype": "$TEST_TRANSPORT", 00:20:01.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.100 "adrfam": "ipv4", 00:20:01.100 "trsvcid": "$NVMF_PORT", 00:20:01.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.100 "hdgst": ${hdgst:-false}, 00:20:01.100 "ddgst": ${ddgst:-false} 00:20:01.100 }, 00:20:01.100 "method": "bdev_nvme_attach_controller" 
00:20:01.100 } 00:20:01.100 EOF 00:20:01.100 )") 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.100 { 00:20:01.100 "params": { 00:20:01.100 "name": "Nvme$subsystem", 00:20:01.100 "trtype": "$TEST_TRANSPORT", 00:20:01.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.100 "adrfam": "ipv4", 00:20:01.100 "trsvcid": "$NVMF_PORT", 00:20:01.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.100 "hdgst": ${hdgst:-false}, 00:20:01.100 "ddgst": ${ddgst:-false} 00:20:01.100 }, 00:20:01.100 "method": "bdev_nvme_attach_controller" 00:20:01.100 } 00:20:01.100 EOF 00:20:01.100 )") 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.100 { 00:20:01.100 "params": { 00:20:01.100 "name": "Nvme$subsystem", 00:20:01.100 "trtype": "$TEST_TRANSPORT", 00:20:01.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.100 "adrfam": "ipv4", 00:20:01.100 "trsvcid": "$NVMF_PORT", 00:20:01.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.100 "hdgst": ${hdgst:-false}, 00:20:01.100 "ddgst": ${ddgst:-false} 00:20:01.100 }, 00:20:01.100 "method": "bdev_nvme_attach_controller" 00:20:01.100 } 00:20:01.100 EOF 00:20:01.100 )") 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.100 { 00:20:01.100 "params": { 00:20:01.100 "name": "Nvme$subsystem", 00:20:01.100 "trtype": "$TEST_TRANSPORT", 00:20:01.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.100 "adrfam": "ipv4", 00:20:01.100 "trsvcid": "$NVMF_PORT", 00:20:01.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.100 "hdgst": ${hdgst:-false}, 00:20:01.100 "ddgst": ${ddgst:-false} 00:20:01.100 }, 00:20:01.100 "method": "bdev_nvme_attach_controller" 00:20:01.100 } 00:20:01.100 EOF 00:20:01.100 )") 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.100 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.100 { 00:20:01.100 "params": { 00:20:01.100 "name": "Nvme$subsystem", 00:20:01.100 "trtype": "$TEST_TRANSPORT", 00:20:01.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.100 "adrfam": "ipv4", 00:20:01.100 "trsvcid": "$NVMF_PORT", 00:20:01.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.100 "hdgst": ${hdgst:-false}, 00:20:01.100 "ddgst": ${ddgst:-false} 00:20:01.101 }, 00:20:01.101 "method": "bdev_nvme_attach_controller" 00:20:01.101 } 00:20:01.101 EOF 00:20:01.101 )") 00:20:01.101 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:01.358 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.358 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.358 { 00:20:01.358 "params": { 00:20:01.358 "name": "Nvme$subsystem", 00:20:01.358 "trtype": "$TEST_TRANSPORT", 00:20:01.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.358 "adrfam": "ipv4", 00:20:01.358 "trsvcid": "$NVMF_PORT", 00:20:01.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.358 "hdgst": ${hdgst:-false}, 00:20:01.358 "ddgst": ${ddgst:-false} 00:20:01.358 }, 00:20:01.358 "method": "bdev_nvme_attach_controller" 00:20:01.358 } 00:20:01.358 EOF 00:20:01.358 )") 00:20:01.358 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:01.358 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:20:01.358 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:01.358 11:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:01.358 "params": { 00:20:01.358 "name": "Nvme1", 00:20:01.359 "trtype": "tcp", 00:20:01.359 "traddr": "10.0.0.2", 00:20:01.359 "adrfam": "ipv4", 00:20:01.359 "trsvcid": "4420", 00:20:01.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.359 "hdgst": false, 00:20:01.359 "ddgst": false 00:20:01.359 }, 00:20:01.359 "method": "bdev_nvme_attach_controller" 00:20:01.359 },{ 00:20:01.359 "params": { 00:20:01.359 "name": "Nvme2", 00:20:01.359 "trtype": "tcp", 00:20:01.359 "traddr": "10.0.0.2", 00:20:01.359 "adrfam": "ipv4", 00:20:01.359 "trsvcid": "4420", 00:20:01.359 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:01.359 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:01.359 "hdgst": false, 00:20:01.359 "ddgst": false 00:20:01.359 }, 
00:20:01.359 "method": "bdev_nvme_attach_controller" 00:20:01.359 },{ 00:20:01.359 "params": { 00:20:01.359 "name": "Nvme3", 00:20:01.359 "trtype": "tcp", 00:20:01.359 "traddr": "10.0.0.2", 00:20:01.359 "adrfam": "ipv4", 00:20:01.359 "trsvcid": "4420", 00:20:01.359 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:01.359 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:01.359 "hdgst": false, 00:20:01.359 "ddgst": false 00:20:01.359 }, 00:20:01.359 "method": "bdev_nvme_attach_controller" 00:20:01.359 },{ 00:20:01.359 "params": { 00:20:01.359 "name": "Nvme4", 00:20:01.359 "trtype": "tcp", 00:20:01.359 "traddr": "10.0.0.2", 00:20:01.359 "adrfam": "ipv4", 00:20:01.359 "trsvcid": "4420", 00:20:01.359 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:01.359 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:01.359 "hdgst": false, 00:20:01.359 "ddgst": false 00:20:01.359 }, 00:20:01.359 "method": "bdev_nvme_attach_controller" 00:20:01.359 },{ 00:20:01.359 "params": { 00:20:01.359 "name": "Nvme5", 00:20:01.359 "trtype": "tcp", 00:20:01.359 "traddr": "10.0.0.2", 00:20:01.359 "adrfam": "ipv4", 00:20:01.359 "trsvcid": "4420", 00:20:01.359 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:01.359 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:01.359 "hdgst": false, 00:20:01.359 "ddgst": false 00:20:01.359 }, 00:20:01.359 "method": "bdev_nvme_attach_controller" 00:20:01.359 },{ 00:20:01.359 "params": { 00:20:01.359 "name": "Nvme6", 00:20:01.359 "trtype": "tcp", 00:20:01.359 "traddr": "10.0.0.2", 00:20:01.359 "adrfam": "ipv4", 00:20:01.359 "trsvcid": "4420", 00:20:01.359 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:01.359 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:01.359 "hdgst": false, 00:20:01.359 "ddgst": false 00:20:01.359 }, 00:20:01.359 "method": "bdev_nvme_attach_controller" 00:20:01.359 },{ 00:20:01.359 "params": { 00:20:01.359 "name": "Nvme7", 00:20:01.359 "trtype": "tcp", 00:20:01.359 "traddr": "10.0.0.2", 00:20:01.359 "adrfam": "ipv4", 00:20:01.359 "trsvcid": "4420", 00:20:01.359 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:01.359 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:01.359 "hdgst": false, 00:20:01.359 "ddgst": false 00:20:01.359 }, 00:20:01.359 "method": "bdev_nvme_attach_controller" 00:20:01.359 },{ 00:20:01.359 "params": { 00:20:01.359 "name": "Nvme8", 00:20:01.359 "trtype": "tcp", 00:20:01.359 "traddr": "10.0.0.2", 00:20:01.359 "adrfam": "ipv4", 00:20:01.359 "trsvcid": "4420", 00:20:01.359 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:01.359 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:01.359 "hdgst": false, 00:20:01.359 "ddgst": false 00:20:01.359 }, 00:20:01.359 "method": "bdev_nvme_attach_controller" 00:20:01.359 },{ 00:20:01.359 "params": { 00:20:01.359 "name": "Nvme9", 00:20:01.359 "trtype": "tcp", 00:20:01.359 "traddr": "10.0.0.2", 00:20:01.359 "adrfam": "ipv4", 00:20:01.359 "trsvcid": "4420", 00:20:01.359 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:01.359 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:01.359 "hdgst": false, 00:20:01.359 "ddgst": false 00:20:01.359 }, 00:20:01.359 "method": "bdev_nvme_attach_controller" 00:20:01.359 },{ 00:20:01.359 "params": { 00:20:01.359 "name": "Nvme10", 00:20:01.359 "trtype": "tcp", 00:20:01.359 "traddr": "10.0.0.2", 00:20:01.359 "adrfam": "ipv4", 00:20:01.359 "trsvcid": "4420", 00:20:01.359 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:01.359 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:01.359 "hdgst": false, 00:20:01.359 "ddgst": false 00:20:01.359 }, 00:20:01.359 "method": "bdev_nvme_attach_controller" 00:20:01.359 }' 00:20:01.359 [2024-11-19 11:21:56.612931] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:20:01.359 [2024-11-19 11:21:56.613004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2653205 ] 00:20:01.359 [2024-11-19 11:21:56.694778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.359 [2024-11-19 11:21:56.753973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.257 Running I/O for 1 seconds... 00:20:04.191 1672.00 IOPS, 104.50 MiB/s 00:20:04.191 Latency(us) 00:20:04.191 [2024-11-19T10:21:59.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.191 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.191 Verification LBA range: start 0x0 length 0x400 00:20:04.191 Nvme1n1 : 1.14 224.53 14.03 0.00 0.00 281986.84 31263.10 256318.58 00:20:04.191 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.191 Verification LBA range: start 0x0 length 0x400 00:20:04.191 Nvme2n1 : 1.04 185.08 11.57 0.00 0.00 335597.92 21942.42 290494.39 00:20:04.191 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.191 Verification LBA range: start 0x0 length 0x400 00:20:04.191 Nvme3n1 : 1.13 226.60 14.16 0.00 0.00 269971.53 30292.20 257872.02 00:20:04.191 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.191 Verification LBA range: start 0x0 length 0x400 00:20:04.191 Nvme4n1 : 1.13 229.62 14.35 0.00 0.00 259740.45 10534.31 268746.15 00:20:04.191 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.191 Verification LBA range: start 0x0 length 0x400 00:20:04.191 Nvme5n1 : 1.17 219.17 13.70 0.00 0.00 269944.60 21262.79 274959.93 00:20:04.191 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.191 Verification LBA range: start 0x0 
length 0x400 00:20:04.191 Nvme6n1 : 1.15 226.81 14.18 0.00 0.00 255152.65 4927.34 268746.15 00:20:04.191 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.191 Verification LBA range: start 0x0 length 0x400 00:20:04.191 Nvme7n1 : 1.15 221.94 13.87 0.00 0.00 256840.44 29903.83 262532.36 00:20:04.191 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.191 Verification LBA range: start 0x0 length 0x400 00:20:04.191 Nvme8n1 : 1.16 220.98 13.81 0.00 0.00 253395.82 19126.80 276513.37 00:20:04.191 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.191 Verification LBA range: start 0x0 length 0x400 00:20:04.191 Nvme9n1 : 1.18 217.70 13.61 0.00 0.00 253039.50 31845.64 271853.04 00:20:04.191 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:04.191 Verification LBA range: start 0x0 length 0x400 00:20:04.191 Nvme10n1 : 1.17 218.35 13.65 0.00 0.00 247837.01 19806.44 288940.94 00:20:04.191 [2024-11-19T10:21:59.688Z] =================================================================================================================== 00:20:04.191 [2024-11-19T10:21:59.688Z] Total : 2190.79 136.92 0.00 0.00 266597.06 4927.34 290494.39 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:04.449 rmmod nvme_tcp 00:20:04.449 rmmod nvme_fabrics 00:20:04.449 rmmod nvme_keyring 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2652602 ']' 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2652602 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2652602 ']' 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2652602 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2652602 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2652602' 00:20:04.449 killing process with pid 2652602 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2652602 00:20:04.449 11:21:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2652602 00:20:05.017 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:05.017 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:05.017 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:05.017 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:05.017 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:05.017 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:05.017 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:05.017 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:20:05.017 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:05.017 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.018 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.018 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:06.924 00:20:06.924 real 0m12.503s 00:20:06.924 user 0m35.093s 00:20:06.924 sys 0m3.635s 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:06.924 ************************************ 00:20:06.924 END TEST nvmf_shutdown_tc1 00:20:06.924 ************************************ 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:06.924 ************************************ 00:20:06.924 START TEST nvmf_shutdown_tc2 00:20:06.924 ************************************ 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:06.924 11:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.924 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.184 11:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:07.184 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:07.184 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:07.184 Found net devices under 0000:82:00.0: cvl_0_0 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.184 11:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:07.184 Found net devices under 0000:82:00.1: cvl_0_1 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.184 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:07.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:20:07.185 00:20:07.185 --- 10.0.0.2 ping statistics --- 00:20:07.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.185 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:07.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:20:07.185 00:20:07.185 --- 10.0.0.1 ping statistics --- 00:20:07.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.185 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:07.185 
11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2653971 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2653971 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2653971 ']' 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.185 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:07.185 [2024-11-19 11:22:02.639675] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:20:07.185 [2024-11-19 11:22:02.639758] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.444 [2024-11-19 11:22:02.724710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.444 [2024-11-19 11:22:02.784645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.444 [2024-11-19 11:22:02.784708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.444 [2024-11-19 11:22:02.784721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.444 [2024-11-19 11:22:02.784733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.444 [2024-11-19 11:22:02.784742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
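The `nvmf_tcp_init` steps traced above (nvmf/common.sh@250-291) move the target-side port `cvl_0_0` into a private network namespace so the initiator (10.0.0.1 on `cvl_0_1`) and the target (10.0.0.2) talk over a real link on a single host. A dry-run sketch of that plumbing, using the interface names and addresses from this log — `run()` only echoes each command, so the sketch is safe to run without root and is illustrative rather than a verbatim copy of common.sh:

```shell
# Print each command instead of executing it (no root / real NICs needed).
run() { printf '+ %s\n' "$*"; }

netns_sketch() {
  ns=cvl_0_0_ns_spdk
  target_if=cvl_0_0
  initiator_if=cvl_0_1
  run ip netns add "$ns"
  # Target port disappears from the root namespace once moved.
  run ip link set "$target_if" netns "$ns"
  run ip addr add 10.0.0.1/24 dev "$initiator_if"
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  # Mirror of the ipts helper: let NVMe/TCP (port 4420) through INPUT.
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  # Reachability check in both directions; the log pings 10.0.0.2 first.
  run ping -c 1 10.0.0.2
}

netns_sketch
```

Because the target now lives in `cvl_0_0_ns_spdk`, every target-side command in the rest of the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).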
00:20:07.444 [2024-11-19 11:22:02.786397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.444 [2024-11-19 11:22:02.786428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.444 [2024-11-19 11:22:02.786488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:07.444 [2024-11-19 11:22:02.786492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.444 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.444 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:07.444 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.444 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.444 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:07.444 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.444 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.444 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.444 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:07.702 [2024-11-19 11:22:02.943140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.702 11:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.702 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:07.702 Malloc1 00:20:07.702 [2024-11-19 11:22:03.042993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.702 Malloc2 00:20:07.702 Malloc3 00:20:07.702 Malloc4 00:20:07.960 Malloc5 00:20:07.960 Malloc6 00:20:07.960 Malloc7 00:20:07.960 Malloc8 00:20:07.960 Malloc9 
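The `for i in "${num_subsystems[@]}" … cat` loop above (shutdown.sh@28-29) appends one RPC batch per subsystem to rpcs.txt, which is what produces the Malloc1..Malloc10 bdevs and the listener on 10.0.0.2:4420 seen in the log. The heredoc body itself is not visible in this excerpt, so the following generator is a plausible reconstruction using standard SPDK RPC names, not a verbatim copy of target/shutdown.sh:

```shell
# Emit an illustrative per-subsystem RPC batch, one block per index.
# Sizes (128 MiB malloc bdevs, 512 B blocks) are assumptions, not from the log.
gen_rpcs() {
  for i in $(seq 1 "$1"); do
    cat <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
  done
}

gen_rpcs 10
```

In the real test the accumulated file is fed to rpc.py in one shot, which is why the ten `MallocN` creations appear back-to-back in the log before any I/O starts.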
00:20:08.219 Malloc10 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2654152 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2654152 /var/tmp/bdevperf.sock 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2654152 ']' 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:08.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.219 { 00:20:08.219 "params": { 00:20:08.219 "name": "Nvme$subsystem", 00:20:08.219 "trtype": "$TEST_TRANSPORT", 00:20:08.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.219 "adrfam": "ipv4", 00:20:08.219 "trsvcid": "$NVMF_PORT", 00:20:08.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.219 "hdgst": ${hdgst:-false}, 00:20:08.219 "ddgst": ${ddgst:-false} 00:20:08.219 }, 00:20:08.219 "method": "bdev_nvme_attach_controller" 00:20:08.219 } 00:20:08.219 EOF 00:20:08.219 )") 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.219 { 00:20:08.219 "params": { 00:20:08.219 "name": "Nvme$subsystem", 00:20:08.219 "trtype": "$TEST_TRANSPORT", 00:20:08.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.219 "adrfam": "ipv4", 00:20:08.219 "trsvcid": "$NVMF_PORT", 00:20:08.219 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.219 "hdgst": ${hdgst:-false}, 00:20:08.219 "ddgst": ${ddgst:-false} 00:20:08.219 }, 00:20:08.219 "method": "bdev_nvme_attach_controller" 00:20:08.219 } 00:20:08.219 EOF 00:20:08.219 )") 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.219 { 00:20:08.219 "params": { 00:20:08.219 "name": "Nvme$subsystem", 00:20:08.219 "trtype": "$TEST_TRANSPORT", 00:20:08.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.219 "adrfam": "ipv4", 00:20:08.219 "trsvcid": "$NVMF_PORT", 00:20:08.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.219 "hdgst": ${hdgst:-false}, 00:20:08.219 "ddgst": ${ddgst:-false} 00:20:08.219 }, 00:20:08.219 "method": "bdev_nvme_attach_controller" 00:20:08.219 } 00:20:08.219 EOF 00:20:08.219 )") 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.219 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.219 { 00:20:08.219 "params": { 00:20:08.219 "name": "Nvme$subsystem", 00:20:08.219 "trtype": "$TEST_TRANSPORT", 00:20:08.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.219 "adrfam": "ipv4", 00:20:08.219 "trsvcid": "$NVMF_PORT", 00:20:08.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.219 "hdgst": 
${hdgst:-false}, 00:20:08.220 "ddgst": ${ddgst:-false} 00:20:08.220 }, 00:20:08.220 "method": "bdev_nvme_attach_controller" 00:20:08.220 } 00:20:08.220 EOF 00:20:08.220 )") 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.220 { 00:20:08.220 "params": { 00:20:08.220 "name": "Nvme$subsystem", 00:20:08.220 "trtype": "$TEST_TRANSPORT", 00:20:08.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.220 "adrfam": "ipv4", 00:20:08.220 "trsvcid": "$NVMF_PORT", 00:20:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.220 "hdgst": ${hdgst:-false}, 00:20:08.220 "ddgst": ${ddgst:-false} 00:20:08.220 }, 00:20:08.220 "method": "bdev_nvme_attach_controller" 00:20:08.220 } 00:20:08.220 EOF 00:20:08.220 )") 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.220 { 00:20:08.220 "params": { 00:20:08.220 "name": "Nvme$subsystem", 00:20:08.220 "trtype": "$TEST_TRANSPORT", 00:20:08.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.220 "adrfam": "ipv4", 00:20:08.220 "trsvcid": "$NVMF_PORT", 00:20:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.220 "hdgst": ${hdgst:-false}, 00:20:08.220 "ddgst": ${ddgst:-false} 00:20:08.220 }, 00:20:08.220 "method": "bdev_nvme_attach_controller" 
00:20:08.220 } 00:20:08.220 EOF 00:20:08.220 )") 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.220 { 00:20:08.220 "params": { 00:20:08.220 "name": "Nvme$subsystem", 00:20:08.220 "trtype": "$TEST_TRANSPORT", 00:20:08.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.220 "adrfam": "ipv4", 00:20:08.220 "trsvcid": "$NVMF_PORT", 00:20:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.220 "hdgst": ${hdgst:-false}, 00:20:08.220 "ddgst": ${ddgst:-false} 00:20:08.220 }, 00:20:08.220 "method": "bdev_nvme_attach_controller" 00:20:08.220 } 00:20:08.220 EOF 00:20:08.220 )") 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.220 { 00:20:08.220 "params": { 00:20:08.220 "name": "Nvme$subsystem", 00:20:08.220 "trtype": "$TEST_TRANSPORT", 00:20:08.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.220 "adrfam": "ipv4", 00:20:08.220 "trsvcid": "$NVMF_PORT", 00:20:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.220 "hdgst": ${hdgst:-false}, 00:20:08.220 "ddgst": ${ddgst:-false} 00:20:08.220 }, 00:20:08.220 "method": "bdev_nvme_attach_controller" 00:20:08.220 } 00:20:08.220 EOF 00:20:08.220 )") 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.220 { 00:20:08.220 "params": { 00:20:08.220 "name": "Nvme$subsystem", 00:20:08.220 "trtype": "$TEST_TRANSPORT", 00:20:08.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.220 "adrfam": "ipv4", 00:20:08.220 "trsvcid": "$NVMF_PORT", 00:20:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.220 "hdgst": ${hdgst:-false}, 00:20:08.220 "ddgst": ${ddgst:-false} 00:20:08.220 }, 00:20:08.220 "method": "bdev_nvme_attach_controller" 00:20:08.220 } 00:20:08.220 EOF 00:20:08.220 )") 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.220 { 00:20:08.220 "params": { 00:20:08.220 "name": "Nvme$subsystem", 00:20:08.220 "trtype": "$TEST_TRANSPORT", 00:20:08.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.220 "adrfam": "ipv4", 00:20:08.220 "trsvcid": "$NVMF_PORT", 00:20:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.220 "hdgst": ${hdgst:-false}, 00:20:08.220 "ddgst": ${ddgst:-false} 00:20:08.220 }, 00:20:08.220 "method": "bdev_nvme_attach_controller" 00:20:08.220 } 00:20:08.220 EOF 00:20:08.220 )") 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:08.220 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:08.220 "params": { 00:20:08.220 "name": "Nvme1", 00:20:08.220 "trtype": "tcp", 00:20:08.220 "traddr": "10.0.0.2", 00:20:08.220 "adrfam": "ipv4", 00:20:08.220 "trsvcid": "4420", 00:20:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.220 "hdgst": false, 00:20:08.220 "ddgst": false 00:20:08.220 }, 00:20:08.220 "method": "bdev_nvme_attach_controller" 00:20:08.220 },{ 00:20:08.220 "params": { 00:20:08.220 "name": "Nvme2", 00:20:08.220 "trtype": "tcp", 00:20:08.220 "traddr": "10.0.0.2", 00:20:08.220 "adrfam": "ipv4", 00:20:08.220 "trsvcid": "4420", 00:20:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:08.220 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:08.220 "hdgst": false, 00:20:08.220 "ddgst": false 00:20:08.220 }, 00:20:08.220 "method": "bdev_nvme_attach_controller" 00:20:08.220 },{ 00:20:08.220 "params": { 00:20:08.220 "name": "Nvme3", 00:20:08.220 "trtype": "tcp", 00:20:08.220 "traddr": "10.0.0.2", 00:20:08.220 "adrfam": "ipv4", 00:20:08.220 "trsvcid": "4420", 00:20:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:08.220 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:08.220 "hdgst": false, 00:20:08.220 "ddgst": false 00:20:08.220 }, 00:20:08.220 "method": "bdev_nvme_attach_controller" 00:20:08.220 },{ 00:20:08.220 "params": { 00:20:08.220 "name": "Nvme4", 00:20:08.220 "trtype": "tcp", 00:20:08.220 "traddr": "10.0.0.2", 00:20:08.220 "adrfam": "ipv4", 00:20:08.220 "trsvcid": "4420", 00:20:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:08.220 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:08.220 "hdgst": false, 00:20:08.220 "ddgst": false 00:20:08.220 }, 00:20:08.220 "method": "bdev_nvme_attach_controller" 00:20:08.220 },{ 
00:20:08.220 "params": { 00:20:08.220 "name": "Nvme5", 00:20:08.220 "trtype": "tcp", 00:20:08.220 "traddr": "10.0.0.2", 00:20:08.220 "adrfam": "ipv4", 00:20:08.220 "trsvcid": "4420", 00:20:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:08.220 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:08.220 "hdgst": false, 00:20:08.221 "ddgst": false 00:20:08.221 }, 00:20:08.221 "method": "bdev_nvme_attach_controller" 00:20:08.221 },{ 00:20:08.221 "params": { 00:20:08.221 "name": "Nvme6", 00:20:08.221 "trtype": "tcp", 00:20:08.221 "traddr": "10.0.0.2", 00:20:08.221 "adrfam": "ipv4", 00:20:08.221 "trsvcid": "4420", 00:20:08.221 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:08.221 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:08.221 "hdgst": false, 00:20:08.221 "ddgst": false 00:20:08.221 }, 00:20:08.221 "method": "bdev_nvme_attach_controller" 00:20:08.221 },{ 00:20:08.221 "params": { 00:20:08.221 "name": "Nvme7", 00:20:08.221 "trtype": "tcp", 00:20:08.221 "traddr": "10.0.0.2", 00:20:08.221 "adrfam": "ipv4", 00:20:08.221 "trsvcid": "4420", 00:20:08.221 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:08.221 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:08.221 "hdgst": false, 00:20:08.221 "ddgst": false 00:20:08.221 }, 00:20:08.221 "method": "bdev_nvme_attach_controller" 00:20:08.221 },{ 00:20:08.221 "params": { 00:20:08.221 "name": "Nvme8", 00:20:08.221 "trtype": "tcp", 00:20:08.221 "traddr": "10.0.0.2", 00:20:08.221 "adrfam": "ipv4", 00:20:08.221 "trsvcid": "4420", 00:20:08.221 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:08.221 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:08.221 "hdgst": false, 00:20:08.221 "ddgst": false 00:20:08.221 }, 00:20:08.221 "method": "bdev_nvme_attach_controller" 00:20:08.221 },{ 00:20:08.221 "params": { 00:20:08.221 "name": "Nvme9", 00:20:08.221 "trtype": "tcp", 00:20:08.221 "traddr": "10.0.0.2", 00:20:08.221 "adrfam": "ipv4", 00:20:08.221 "trsvcid": "4420", 00:20:08.221 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:08.221 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:08.221 "hdgst": false, 00:20:08.221 "ddgst": false 00:20:08.221 }, 00:20:08.221 "method": "bdev_nvme_attach_controller" 00:20:08.221 },{ 00:20:08.221 "params": { 00:20:08.221 "name": "Nvme10", 00:20:08.221 "trtype": "tcp", 00:20:08.221 "traddr": "10.0.0.2", 00:20:08.221 "adrfam": "ipv4", 00:20:08.221 "trsvcid": "4420", 00:20:08.221 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:08.221 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:08.221 "hdgst": false, 00:20:08.221 "ddgst": false 00:20:08.221 }, 00:20:08.221 "method": "bdev_nvme_attach_controller" 00:20:08.221 }' 00:20:08.221 [2024-11-19 11:22:03.556581] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:20:08.221 [2024-11-19 11:22:03.556687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2654152 ] 00:20:08.221 [2024-11-19 11:22:03.638657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.221 [2024-11-19 11:22:03.697769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.119 Running I/O for 10 seconds... 
00:20:10.119 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.119 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:10.119 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:10.119 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.119 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=14 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 14 -ge 100 ']' 00:20:10.378 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:10.636 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:10.636 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:10.636 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:10.636 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.636 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.636 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:10.636 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.636 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=80 00:20:10.636 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 80 -ge 100 ']' 00:20:10.636 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2654152 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2654152 
']' 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2654152 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2654152 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2654152' 00:20:10.895 killing process with pid 2654152 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2654152 00:20:10.895 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2654152 00:20:11.154 1679.00 IOPS, 104.94 MiB/s [2024-11-19T10:22:06.651Z] Received shutdown signal, test time was about 1.094170 seconds 00:20:11.154 00:20:11.154 Latency(us) 00:20:11.154 [2024-11-19T10:22:06.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.154 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:11.154 Verification LBA range: start 0x0 length 0x400 00:20:11.154 Nvme1n1 : 1.08 246.17 15.39 0.00 0.00 254508.77 12913.02 239230.67 00:20:11.154 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:11.154 Verification LBA range: start 0x0 length 0x400 00:20:11.154 Nvme2n1 : 1.08 236.69 
14.79 0.00 0.00 262305.94 18641.35 271853.04 00:20:11.154 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:11.154 Verification LBA range: start 0x0 length 0x400 00:20:11.154 Nvme3n1 : 1.07 238.52 14.91 0.00 0.00 255693.75 18252.99 268746.15 00:20:11.154 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:11.154 Verification LBA range: start 0x0 length 0x400 00:20:11.154 Nvme4n1 : 1.08 237.42 14.84 0.00 0.00 252051.91 23301.69 285834.05 00:20:11.154 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:11.154 Verification LBA range: start 0x0 length 0x400 00:20:11.154 Nvme5n1 : 1.04 183.84 11.49 0.00 0.00 318304.21 35923.44 271853.04 00:20:11.154 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:11.154 Verification LBA range: start 0x0 length 0x400 00:20:11.154 Nvme6n1 : 1.05 183.32 11.46 0.00 0.00 312627.45 21942.42 273406.48 00:20:11.154 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:11.154 Verification LBA range: start 0x0 length 0x400 00:20:11.154 Nvme7n1 : 1.09 234.14 14.63 0.00 0.00 241088.28 18447.17 278066.82 00:20:11.154 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:11.154 Verification LBA range: start 0x0 length 0x400 00:20:11.154 Nvme8n1 : 1.09 235.16 14.70 0.00 0.00 234125.46 18738.44 265639.25 00:20:11.154 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:11.154 Verification LBA range: start 0x0 length 0x400 00:20:11.154 Nvme9n1 : 1.06 180.66 11.29 0.00 0.00 298879.05 35146.71 279620.27 00:20:11.154 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:11.154 Verification LBA range: start 0x0 length 0x400 00:20:11.154 Nvme10n1 : 1.07 183.84 11.49 0.00 0.00 285751.08 6990.51 298261.62 00:20:11.154 [2024-11-19T10:22:06.651Z] 
=================================================================================================================== 00:20:11.154 [2024-11-19T10:22:06.651Z] Total : 2159.76 134.99 0.00 0.00 267905.41 6990.51 298261.62 00:20:11.154 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:12.586 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2653971 00:20:12.586 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:12.586 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:12.586 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:12.586 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:12.586 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:12.587 
rmmod nvme_tcp 00:20:12.587 rmmod nvme_fabrics 00:20:12.587 rmmod nvme_keyring 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2653971 ']' 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2653971 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2653971 ']' 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2653971 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2653971 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2653971' 00:20:12.587 killing process with pid 2653971 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 
-- # kill 2653971 00:20:12.587 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2653971 00:20:12.846 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:12.846 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:12.846 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:12.846 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:12.846 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:12.846 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:12.846 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:12.846 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:12.846 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:12.846 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.846 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.846 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.389 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:15.389 00:20:15.389 real 0m7.875s 00:20:15.389 user 0m24.427s 00:20:15.389 sys 0m1.595s 00:20:15.389 11:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.389 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:15.389 ************************************ 00:20:15.389 END TEST nvmf_shutdown_tc2 00:20:15.389 ************************************ 00:20:15.389 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:15.390 ************************************ 00:20:15.390 START TEST nvmf_shutdown_tc3 00:20:15.390 ************************************ 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:15.390 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # net_devs=() 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.391 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:15.392 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:15.392 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.393 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.393 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:15.393 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:15.393 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:15.393 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:15.393 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:15.393 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:15.393 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.393 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.393 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:15.393 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:15.393 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:15.393 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:15.394 11:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:15.394 Found net devices under 0000:82:00.0: cvl_0_0 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:15.394 11:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.394 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:15.394 Found net devices under 0000:82:00.1: cvl_0_1 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:15.395 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
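The trace above shows nvmf/common.sh walking each PCI address and resolving its kernel net device through the sysfs glob `/sys/bus/pci/devices/$pci/net/*`, then stripping the path prefix to keep just the interface name. A minimal sketch of that mapping logic, run against a fake sysfs tree so it works without real NICs (the tree layout and `cvl_0_*` names mirror the log; the fake root is an assumption):

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> net-device mapping seen in nvmf/common.sh above.
# sysfs_root is a parameter so the logic can be exercised against a fake
# tree; on a real host it would be /sys/bus/pci/devices.
map_pci_to_netdevs() {
    local sysfs_root=$1; shift
    local pci net_devs=()
    for pci in "$@"; do
        # Same glob the log shows: /sys/bus/pci/devices/$pci/net/*
        local pci_net_devs=("$sysfs_root/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue
        # Strip the path prefix, keeping only the interface names
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
    printf '%s\n' "${net_devs[@]}"
}

# Demo against a fake sysfs tree (interface names taken from the log)
root=$(mktemp -d)
mkdir -p "$root/0000:82:00.0/net/cvl_0_0" "$root/0000:82:00.1/net/cvl_0_1"
out=$(map_pci_to_netdevs "$root" 0000:82:00.0 0000:82:00.1)
echo "$out"
rm -rf "$root"
```

With both fake devices present this prints one "Found net devices under ..." line per port, matching the messages in the trace.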
nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:15.396 11:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:15.396 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:15.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:20:15.397 00:20:15.397 --- 10.0.0.2 ping statistics --- 00:20:15.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.397 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:20:15.397 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:15.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:15.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:20:15.397 00:20:15.397 --- 10.0.0.1 ping statistics --- 00:20:15.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.397 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:15.397 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.397 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:15.397 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:15.397 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.397 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:15.397 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:15.397 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.398 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:15.398 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:15.398 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:15.398 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:15.398 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.398 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:15.398 
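The `nvmf_tcp_init` sequence above moves the target-side port into its own network namespace, assigns the 10.0.0.x addresses, brings the links up, opens TCP port 4420, and verifies reachability with pings in both directions. A dry-run sketch of that sequence (with `RUN=echo` each privileged command is printed rather than executed; set `RUN=` on a real host with root to apply it — interface and address values are taken from the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the TCP init steps from nvmf/common.sh above.
tcp_init() {
    local RUN=${RUN:-echo}   # echo by default; empty to really execute
    local TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    local TARGET_IP=10.0.0.2 INITIATOR_IP=10.0.0.1

    $RUN ip -4 addr flush "$TARGET_IF"
    $RUN ip -4 addr flush "$INITIATOR_IF"
    $RUN ip netns add "$NS"
    # Move the target-side port into the namespace; initiator stays in the root ns
    $RUN ip link set "$TARGET_IF" netns "$NS"
    $RUN ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
    $RUN ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
    $RUN ip link set "$INITIATOR_IF" up
    $RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
    $RUN ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP listener port toward the initiator interface
    $RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    # Sanity checks the log performs next
    $RUN ping -c 1 "$TARGET_IP"
    $RUN ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
}

out=$(RUN=echo tcp_init)
echo "$out"
```

Isolating the target in a namespace is what lets a single host act as both NVMe-oF target and initiator over real E810 ports.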
11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2655073 00:20:15.398 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:15.398 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2655073 00:20:15.398 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2655073 ']' 00:20:15.398 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.398 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.399 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.399 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.399 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:15.399 [2024-11-19 11:22:10.575223] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:20:15.399 [2024-11-19 11:22:10.575310] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.399 [2024-11-19 11:22:10.656785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:15.399 [2024-11-19 11:22:10.714794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.399 [2024-11-19 11:22:10.714846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.399 [2024-11-19 11:22:10.714860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.399 [2024-11-19 11:22:10.714871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.399 [2024-11-19 11:22:10.714881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:15.399 [2024-11-19 11:22:10.716357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.399 [2024-11-19 11:22:10.716490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:15.399 [2024-11-19 11:22:10.716554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:15.399 [2024-11-19 11:22:10.716557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.399 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.399 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:15.399 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:15.400 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:15.400 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:15.400 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.400 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:15.400 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.400 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:15.400 [2024-11-19 11:22:10.869881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.400 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.400 11:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:15.400 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:15.400 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.400 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.662 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:15.662 Malloc1 00:20:15.662 [2024-11-19 11:22:10.978422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.662 Malloc2 00:20:15.662 Malloc3 00:20:15.662 Malloc4 00:20:15.662 Malloc5 00:20:15.920 Malloc6 00:20:15.920 Malloc7 00:20:15.920 Malloc8 00:20:15.920 Malloc9 
00:20:15.920 Malloc10 00:20:16.178 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.178 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:16.178 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2655247 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2655247 /var/tmp/bdevperf.sock 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2655247 ']' 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:16.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:16.179 { 00:20:16.179 "params": { 00:20:16.179 "name": "Nvme$subsystem", 00:20:16.179 "trtype": "$TEST_TRANSPORT", 00:20:16.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.179 "adrfam": "ipv4", 00:20:16.179 "trsvcid": "$NVMF_PORT", 00:20:16.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.179 "hdgst": ${hdgst:-false}, 00:20:16.179 "ddgst": ${ddgst:-false} 00:20:16.179 }, 00:20:16.179 "method": "bdev_nvme_attach_controller" 00:20:16.179 } 00:20:16.179 EOF 00:20:16.179 )") 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:16.179 { 00:20:16.179 "params": { 00:20:16.179 "name": "Nvme$subsystem", 00:20:16.179 "trtype": "$TEST_TRANSPORT", 00:20:16.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.179 "adrfam": "ipv4", 00:20:16.179 "trsvcid": "$NVMF_PORT", 00:20:16.179 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.179 "hdgst": ${hdgst:-false}, 00:20:16.179 "ddgst": ${ddgst:-false} 00:20:16.179 }, 00:20:16.179 "method": "bdev_nvme_attach_controller" 00:20:16.179 } 00:20:16.179 EOF 00:20:16.179 )") 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:16.179 { 00:20:16.179 "params": { 00:20:16.179 "name": "Nvme$subsystem", 00:20:16.179 "trtype": "$TEST_TRANSPORT", 00:20:16.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.179 "adrfam": "ipv4", 00:20:16.179 "trsvcid": "$NVMF_PORT", 00:20:16.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.179 "hdgst": ${hdgst:-false}, 00:20:16.179 "ddgst": ${ddgst:-false} 00:20:16.179 }, 00:20:16.179 "method": "bdev_nvme_attach_controller" 00:20:16.179 } 00:20:16.179 EOF 00:20:16.179 )") 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:16.179 { 00:20:16.179 "params": { 00:20:16.179 "name": "Nvme$subsystem", 00:20:16.179 "trtype": "$TEST_TRANSPORT", 00:20:16.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.179 "adrfam": "ipv4", 00:20:16.179 "trsvcid": "$NVMF_PORT", 00:20:16.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.179 "hdgst": 
${hdgst:-false}, 00:20:16.179 "ddgst": ${ddgst:-false} 00:20:16.179 }, 00:20:16.179 "method": "bdev_nvme_attach_controller" 00:20:16.179 } 00:20:16.179 EOF 00:20:16.179 )") 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:16.179 { 00:20:16.179 "params": { 00:20:16.179 "name": "Nvme$subsystem", 00:20:16.179 "trtype": "$TEST_TRANSPORT", 00:20:16.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.179 "adrfam": "ipv4", 00:20:16.179 "trsvcid": "$NVMF_PORT", 00:20:16.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.179 "hdgst": ${hdgst:-false}, 00:20:16.179 "ddgst": ${ddgst:-false} 00:20:16.179 }, 00:20:16.179 "method": "bdev_nvme_attach_controller" 00:20:16.179 } 00:20:16.179 EOF 00:20:16.179 )") 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:16.179 { 00:20:16.179 "params": { 00:20:16.179 "name": "Nvme$subsystem", 00:20:16.179 "trtype": "$TEST_TRANSPORT", 00:20:16.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.179 "adrfam": "ipv4", 00:20:16.179 "trsvcid": "$NVMF_PORT", 00:20:16.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.179 "hdgst": ${hdgst:-false}, 00:20:16.179 "ddgst": ${ddgst:-false} 00:20:16.179 }, 00:20:16.179 "method": "bdev_nvme_attach_controller" 
00:20:16.179 } 00:20:16.179 EOF 00:20:16.179 )") 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:16.179 { 00:20:16.179 "params": { 00:20:16.179 "name": "Nvme$subsystem", 00:20:16.179 "trtype": "$TEST_TRANSPORT", 00:20:16.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.179 "adrfam": "ipv4", 00:20:16.179 "trsvcid": "$NVMF_PORT", 00:20:16.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.179 "hdgst": ${hdgst:-false}, 00:20:16.179 "ddgst": ${ddgst:-false} 00:20:16.179 }, 00:20:16.179 "method": "bdev_nvme_attach_controller" 00:20:16.179 } 00:20:16.179 EOF 00:20:16.179 )") 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:16.179 { 00:20:16.179 "params": { 00:20:16.179 "name": "Nvme$subsystem", 00:20:16.179 "trtype": "$TEST_TRANSPORT", 00:20:16.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.179 "adrfam": "ipv4", 00:20:16.179 "trsvcid": "$NVMF_PORT", 00:20:16.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.179 "hdgst": ${hdgst:-false}, 00:20:16.179 "ddgst": ${ddgst:-false} 00:20:16.179 }, 00:20:16.179 "method": "bdev_nvme_attach_controller" 00:20:16.179 } 00:20:16.179 EOF 00:20:16.179 )") 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@582 -- # cat 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:16.179 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:16.179 { 00:20:16.179 "params": { 00:20:16.179 "name": "Nvme$subsystem", 00:20:16.179 "trtype": "$TEST_TRANSPORT", 00:20:16.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.179 "adrfam": "ipv4", 00:20:16.179 "trsvcid": "$NVMF_PORT", 00:20:16.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.180 "hdgst": ${hdgst:-false}, 00:20:16.180 "ddgst": ${ddgst:-false} 00:20:16.180 }, 00:20:16.180 "method": "bdev_nvme_attach_controller" 00:20:16.180 } 00:20:16.180 EOF 00:20:16.180 )") 00:20:16.180 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:16.180 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:16.180 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:16.180 { 00:20:16.180 "params": { 00:20:16.180 "name": "Nvme$subsystem", 00:20:16.180 "trtype": "$TEST_TRANSPORT", 00:20:16.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.180 "adrfam": "ipv4", 00:20:16.180 "trsvcid": "$NVMF_PORT", 00:20:16.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.180 "hdgst": ${hdgst:-false}, 00:20:16.180 "ddgst": ${ddgst:-false} 00:20:16.180 }, 00:20:16.180 "method": "bdev_nvme_attach_controller" 00:20:16.180 } 00:20:16.180 EOF 00:20:16.180 )") 00:20:16.180 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:16.180 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:20:16.180 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:16.180 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:16.180 "params": { 00:20:16.180 "name": "Nvme1", 00:20:16.180 "trtype": "tcp", 00:20:16.180 "traddr": "10.0.0.2", 00:20:16.180 "adrfam": "ipv4", 00:20:16.180 "trsvcid": "4420", 00:20:16.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.180 "hdgst": false, 00:20:16.180 "ddgst": false 00:20:16.180 }, 00:20:16.180 "method": "bdev_nvme_attach_controller" 00:20:16.180 },{ 00:20:16.180 "params": { 00:20:16.180 "name": "Nvme2", 00:20:16.180 "trtype": "tcp", 00:20:16.180 "traddr": "10.0.0.2", 00:20:16.180 "adrfam": "ipv4", 00:20:16.180 "trsvcid": "4420", 00:20:16.180 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:16.180 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:16.180 "hdgst": false, 00:20:16.180 "ddgst": false 00:20:16.180 }, 00:20:16.180 "method": "bdev_nvme_attach_controller" 00:20:16.180 },{ 00:20:16.180 "params": { 00:20:16.180 "name": "Nvme3", 00:20:16.180 "trtype": "tcp", 00:20:16.180 "traddr": "10.0.0.2", 00:20:16.180 "adrfam": "ipv4", 00:20:16.180 "trsvcid": "4420", 00:20:16.180 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:16.180 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:16.180 "hdgst": false, 00:20:16.180 "ddgst": false 00:20:16.180 }, 00:20:16.180 "method": "bdev_nvme_attach_controller" 00:20:16.180 },{ 00:20:16.180 "params": { 00:20:16.180 "name": "Nvme4", 00:20:16.180 "trtype": "tcp", 00:20:16.180 "traddr": "10.0.0.2", 00:20:16.180 "adrfam": "ipv4", 00:20:16.180 "trsvcid": "4420", 00:20:16.180 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:16.180 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:16.180 "hdgst": false, 00:20:16.180 "ddgst": false 00:20:16.180 }, 00:20:16.180 "method": "bdev_nvme_attach_controller" 00:20:16.180 },{ 
00:20:16.180 "params": { 00:20:16.180 "name": "Nvme5", 00:20:16.180 "trtype": "tcp", 00:20:16.180 "traddr": "10.0.0.2", 00:20:16.180 "adrfam": "ipv4", 00:20:16.180 "trsvcid": "4420", 00:20:16.180 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:16.180 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:16.180 "hdgst": false, 00:20:16.180 "ddgst": false 00:20:16.180 }, 00:20:16.180 "method": "bdev_nvme_attach_controller" 00:20:16.180 },{ 00:20:16.180 "params": { 00:20:16.180 "name": "Nvme6", 00:20:16.180 "trtype": "tcp", 00:20:16.180 "traddr": "10.0.0.2", 00:20:16.180 "adrfam": "ipv4", 00:20:16.180 "trsvcid": "4420", 00:20:16.180 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:16.180 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:16.180 "hdgst": false, 00:20:16.180 "ddgst": false 00:20:16.180 }, 00:20:16.180 "method": "bdev_nvme_attach_controller" 00:20:16.180 },{ 00:20:16.180 "params": { 00:20:16.180 "name": "Nvme7", 00:20:16.180 "trtype": "tcp", 00:20:16.180 "traddr": "10.0.0.2", 00:20:16.180 "adrfam": "ipv4", 00:20:16.180 "trsvcid": "4420", 00:20:16.180 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:16.180 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:16.180 "hdgst": false, 00:20:16.180 "ddgst": false 00:20:16.180 }, 00:20:16.180 "method": "bdev_nvme_attach_controller" 00:20:16.180 },{ 00:20:16.180 "params": { 00:20:16.180 "name": "Nvme8", 00:20:16.180 "trtype": "tcp", 00:20:16.180 "traddr": "10.0.0.2", 00:20:16.180 "adrfam": "ipv4", 00:20:16.180 "trsvcid": "4420", 00:20:16.180 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:16.180 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:16.180 "hdgst": false, 00:20:16.180 "ddgst": false 00:20:16.180 }, 00:20:16.180 "method": "bdev_nvme_attach_controller" 00:20:16.180 },{ 00:20:16.180 "params": { 00:20:16.180 "name": "Nvme9", 00:20:16.180 "trtype": "tcp", 00:20:16.180 "traddr": "10.0.0.2", 00:20:16.180 "adrfam": "ipv4", 00:20:16.180 "trsvcid": "4420", 00:20:16.180 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:16.180 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:16.180 "hdgst": false, 00:20:16.180 "ddgst": false 00:20:16.180 }, 00:20:16.180 "method": "bdev_nvme_attach_controller" 00:20:16.180 },{ 00:20:16.180 "params": { 00:20:16.180 "name": "Nvme10", 00:20:16.180 "trtype": "tcp", 00:20:16.180 "traddr": "10.0.0.2", 00:20:16.180 "adrfam": "ipv4", 00:20:16.180 "trsvcid": "4420", 00:20:16.180 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:16.180 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:16.180 "hdgst": false, 00:20:16.180 "ddgst": false 00:20:16.180 }, 00:20:16.180 "method": "bdev_nvme_attach_controller" 00:20:16.180 }' 00:20:16.180 [2024-11-19 11:22:11.496078] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:20:16.180 [2024-11-19 11:22:11.496152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655247 ] 00:20:16.180 [2024-11-19 11:22:11.577544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.180 [2024-11-19 11:22:11.636310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.079 Running I/O for 10 seconds... 
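The repetitive `config+=("$(cat <<-EOF ...)")` fragments above are `gen_nvmf_target_json` building one `bdev_nvme_attach_controller` entry per subsystem id and comma-joining them for bdevperf's `--json` input. A self-contained sketch of that pattern (address and NQN values are the placeholders from the log; the real helper substitutes `$NVMF_FIRST_TARGET_IP` etc. and pipes the result through `jq`):

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem JSON generation seen above: one heredoc
# entry per id, joined with commas via IFS, as gen_nvmf_target_json does.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Joining with IFS=, reproduces the comma-separated entry list the
    # log's printf '%s\n' emits before jq assembles the final config.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

json=$(gen_target_json 1 2 3)
echo "$json"
```

Generating the config on the fly through `/dev/fd/63` (as the bdevperf command line shows) avoids writing a temporary file per test case.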
00:20:18.079 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.079 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:18.079 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:18.079 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.079 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:18.338 11:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=15 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 15 -ge 100 ']' 00:20:18.338 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2655073 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2655073 ']' 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2655073 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2655073 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2655073' 00:20:18.612 killing process with pid 2655073 00:20:18.612 11:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2655073 00:20:18.612 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2655073 00:20:18.612 [2024-11-19 11:22:13.950420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 
is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.950764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf46f00 is same with the state(6) to be set 00:20:18.612 [2024-11-19 11:22:13.952103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.612 [2024-11-19 11:22:13.952152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.612 [2024-11-19 11:22:13.952195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.612 [2024-11-19 11:22:13.952224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.612 [2024-11-19 11:22:13.952255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.612 [2024-11-19 11:22:13.952282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.612 [2024-11-19 11:22:13.952311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.612 [2024-11-19 11:22:13.952337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.612 [2024-11-19 11:22:13.952377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.612 [2024-11-19 11:22:13.952405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.612 [2024-11-19 11:22:13.952433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.612 [2024-11-19 11:22:13.952459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.612 [2024-11-19 11:22:13.952488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.612 [2024-11-19 11:22:13.952520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:18.612 [2024-11-19 11:22:13.952550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.612 [2024-11-19 11:22:13.952575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.612 [2024-11-19 11:22:13.952602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.612 [2024-11-19 11:22:13.952627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.952657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.952683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.952712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.952738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.952764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.952790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.952817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.952843] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.952870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.952897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.952926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.952955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.952983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.953010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.953037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.953063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.953090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.953117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.953145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.953171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.953204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.953239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.953267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.953275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set 00:20:18.613 [2024-11-19 11:22:13.953294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.613 [2024-11-19 11:22:13.953311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set 00:20:18.613 [2024-11-19 11:22:13.953327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set 00:20:18.613 [2024-11-19 11:22:13.953322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.613 [2024-11-19 11:22:13.953340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set 00:20:18.613 [2024-11-19 11:22:13.953355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set 00:20:18.613 [2024-11-19 11:22:13.953358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.613 [2024-11-19 11:22:13.953378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.613 [2024-11-19 11:22:13.953405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.613 [2024-11-19 11:22:13.953438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.613 [2024-11-19 11:22:13.953463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.613 [2024-11-19 11:22:13.953491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.613 [2024-11-19 11:22:13.953518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.613 [2024-11-19 11:22:13.953555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.613 [2024-11-19 11:22:13.953581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.613 [2024-11-19 11:22:13.953618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.613 [2024-11-19 11:22:13.953653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.613 [2024-11-19 11:22:13.953677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.613 [2024-11-19 11:22:13.953701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.613 [2024-11-19 11:22:13.953729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.613 [2024-11-19 11:22:13.953756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.613 [2024-11-19 11:22:13.953799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.613 [2024-11-19 11:22:13.953812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.614 [2024-11-19 11:22:13.953823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.614 [2024-11-19 11:22:13.953851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.614 [2024-11-19 11:22:13.953878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.614 [2024-11-19 11:22:13.953903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.614 [2024-11-19 11:22:13.953939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.614 [2024-11-19 11:22:13.953975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.953991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.614 [2024-11-19 11:22:13.953999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.954015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.954020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.614 [2024-11-19 11:22:13.954034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.954047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.954048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.614 [2024-11-19 11:22:13.954059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.954072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.954073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.614 [2024-11-19 11:22:13.954083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.954095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.954100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.614 [2024-11-19 11:22:13.954106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.954123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.954129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.614 [2024-11-19 11:22:13.954135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.954150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48ce0 is same with the state(6) to be set
00:20:18.614 [2024-11-19 11:22:13.954158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.614 [2024-11-19 11:22:13.954186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.614 [2024-11-19 11:22:13.954213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.614 [2024-11-19 11:22:13.954245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.614 [2024-11-19 11:22:13.954272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.614 [2024-11-19 11:22:13.954298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.614 [2024-11-19 11:22:13.954325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.614 [2024-11-19 11:22:13.954359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.614 [2024-11-19 11:22:13.954401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.614 [2024-11-19 11:22:13.954427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.614 [2024-11-19
11:22:13.954455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.954487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.954527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.954553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.954582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.954607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.954648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.954673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.954722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.954747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.954773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.954798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.954825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.954850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.954878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.954902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.954930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.954954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.954982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.955005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.955032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.955056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.955083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 
nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.955108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.955135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.614 [2024-11-19 11:22:13.955166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-11-19 11:22:13.955203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.955229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.955256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.955280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.955307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.955333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.955374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.955401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:18.615 [2024-11-19 11:22:13.955427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.615 [2024-11-19 11:22:13.955458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.615 [2024-11-19 11:22:13.955484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.615 [2024-11-19 11:22:13.955521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.615 [2024-11-19 11:22:13.955547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.615 [2024-11-19 11:22:13.955572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.615 [2024-11-19 11:22:13.955577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf473d0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.955601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf473d0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.955598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.615 [2024-11-19 11:22:13.955628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf473d0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.955625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.615 [2024-11-19 11:22:13.955650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf473d0 is same with the state(6) to be set 00:20:18.615 [2024-11-19 11:22:13.955658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.955684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.955716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.955749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.955786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.955816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.956545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.956578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.956612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.956639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.956668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.956693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.956721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.956746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.956774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.956799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.956826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.956851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.956891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.956915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.956941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.956966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 
11:22:13.956991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.615 [2024-11-19 11:22:13.957017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.615 [2024-11-19 11:22:13.957043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.615 [2024-11-19 11:22:13.957069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.615 [2024-11-19 11:22:13.957095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.615 [2024-11-19 11:22:13.957120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.615 [2024-11-19 11:22:13.957148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.957155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.615 [2024-11-19 11:22:13.957187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.957190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.615 [2024-11-19 11:22:13.957212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.957225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.615 [2024-11-19 11:22:13.957221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.957237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.615 [2024-11-19 11:22:13.957249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.615 [2024-11-19 11:22:13.957247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.957260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.615 [2024-11-19 11:22:13.957273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.615 [2024-11-19 11:22:13.957274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.615 [2024-11-19 11:22:13.957284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.615 [2024-11-19 11:22:13.957297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.615 [2024-11-19 11:22:13.957300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.615 [2024-11-19 11:22:13.957309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.615 [2024-11-19 11:22:13.957320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.957332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.957326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.615 [2024-11-19 11:22:13.957346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.957360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.957370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.615 [2024-11-19 11:22:13.957396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.957410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.957424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.615 [2024-11-19 11:22:13.957426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.616 [2024-11-19 11:22:13.957436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957453] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.616 [2024-11-19 11:22:13.957467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.616 [2024-11-19 11:22:13.957492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.616 [2024-11-19 11:22:13.957535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.616 [2024-11-19 11:22:13.957542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.616 [2024-11-19 11:22:13.957570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.616 [2024-11-19 11:22:13.957608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.616 [2024-11-19 11:22:13.957642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.616 [2024-11-19 11:22:13.957665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.616 [2024-11-19 11:22:13.957707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.616 [2024-11-19 11:22:13.957738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.616 [2024-11-19 11:22:13.957750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.616 [2024-11-19 11:22:13.957799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:18.616 [2024-11-19 11:22:13.957811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.616 [2024-11-19 11:22:13.957835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.616 [2024-11-19 11:22:13.957863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.616 [2024-11-19 11:22:13.957901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.957913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be 
set 00:20:18.616 [2024-11-19 11:22:13.957910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.616 [2024-11-19 11:22:13.957925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.616 [2024-11-19 11:22:13.957936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.616 [2024-11-19 11:22:13.957937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.616 [2024-11-19 11:22:13.957948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.616 [2024-11-19 11:22:13.957963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.616 [2024-11-19 11:22:13.957963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.616 [2024-11-19 11:22:13.957976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.616 [2024-11-19 11:22:13.957988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.616 [2024-11-19 11:22:13.957989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.616 [2024-11-19 11:22:13.957999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set 00:20:18.616 [2024-11-19 11:22:13.958012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the 
state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.958015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.616 [2024-11-19 11:22:13.958024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.958036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.958047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf478a0 is same with the state(6) to be set
00:20:18.616 [2024-11-19 11:22:13.958041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.616 [2024-11-19 11:22:13.958067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.616 [2024-11-19 11:22:13.958093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.616 [2024-11-19 11:22:13.958118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.616 [2024-11-19 11:22:13.958144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.616 [2024-11-19 11:22:13.958168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.616 [2024-11-19 11:22:13.958194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.617 [2024-11-19 11:22:13.958219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:18.617 [2024-11-19 11:22:13.958863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.958962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.958986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.959013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.959047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.959073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.959112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.617 [2024-11-19 11:22:13.959141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.617 [2024-11-19 11:22:13.959171] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.617 [2024-11-19 11:22:13.959199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.617 [2024-11-19 11:22:13.959222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.617 [2024-11-19 11:22:13.959249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.617 [2024-11-19 11:22:13.959272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.617 [2024-11-19 11:22:13.959293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.617 [2024-11-19 11:22:13.959325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.617 [2024-11-19 11:22:13.959339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.617 [2024-11-19 11:22:13.959392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.617 [2024-11-19 11:22:13.959433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.617 [2024-11-19 11:22:13.959473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.617 [2024-11-19 11:22:13.959486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with
the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.617 [2024-11-19 11:22:13.959525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.617 [2024-11-19 11:22:13.959550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.617 [2024-11-19 11:22:13.959574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.617 [2024-11-19 11:22:13.959595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.618 [2024-11-19 11:22:13.959603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is
same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.618 [2024-11-19 11:22:13.959639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.618 [2024-11-19 11:22:13.959658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.618 [2024-11-19 11:22:13.959700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.618 [2024-11-19 11:22:13.959727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.618 [2024-11-19 11:22:13.959769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.618 [2024-11-19 11:22:13.959787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.618 [2024-11-19 11:22:13.959813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.618 [2024-11-19 11:22:13.959838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.618 [2024-11-19 11:22:13.959862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.618 [2024-11-19 11:22:13.959889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.618 [2024-11-19 11:22:13.959927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.959935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.618 [2024-11-19 11:22:13.959951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.959962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.959963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.618 [2024-11-19 11:22:13.959974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.959987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.959987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.618 [2024-11-19 11:22:13.959998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.960013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.960016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.618 [2024-11-19 11:22:13.960025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.960037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.960049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.960046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.618 [2024-11-19 11:22:13.960061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.960073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.960075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.618 [2024-11-19 11:22:13.960088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.960100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.960112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.960110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.618 [2024-11-19 11:22:13.960124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.960136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.960138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.618 [2024-11-19 11:22:13.960147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set
00:20:18.618 [2024-11-19 11:22:13.960161]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.960166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.618 [2024-11-19 11:22:13.960173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47d90 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.960221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:18.618 [2024-11-19 11:22:13.961025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.961051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.961064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.961077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.961091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.961109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.961122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.961134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.618 [2024-11-19 
11:22:13.961147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.961160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.961171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.961192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.618 [2024-11-19 11:22:13.961204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961301] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 
is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 
00:20:18.619 [2024-11-19 11:22:13.961797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.961857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7de0 is same with the state(6) to be set 00:20:18.619 [2024-11-19 11:22:13.962899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.619 [2024-11-19 11:22:13.962934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.619 [2024-11-19 11:22:13.962961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.619 [2024-11-19 11:22:13.962986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.619 [2024-11-19 11:22:13.963009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.619 [2024-11-19 11:22:13.963033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:20:18.619 [2024-11-19 11:22:13.963056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.619 [2024-11-19 11:22:13.963080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.619 [2024-11-19 11:22:13.963103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ef1d0 is same with the state(6) to be set
00:20:18.619 [2024-11-19 11:22:13.963207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.619 [2024-11-19 11:22:13.963236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.619 [2024-11-19 11:22:13.963263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.619 [2024-11-19 11:22:13.963281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd82b0 is same with the state(6) to be set
00:20:18.619 [2024-11-19 11:22:13.963286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.619 [2024-11-19 11:22:13.963312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd82b0 is same with the state(6) to be set
00:20:18.619 [2024-11-19 11:22:13.963315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.619 [2024-11-19 11:22:13.963326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd82b0 is same with the state(6) to be set
00:20:18.619 [2024-11-19 11:22:13.963339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0xcd82b0 is same with the state(6) to be set
00:20:18.619 [2024-11-19 11:22:13.963339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.619 [2024-11-19 11:22:13.963352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd82b0 is same with the state(6) to be set
00:20:18.619 [2024-11-19 11:22:13.963388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.619 [2024-11-19 11:22:13.963416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.619 [2024-11-19 11:22:13.963440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66aa0 is same with the state(6) to be set
00:20:18.620 [2024-11-19 11:22:13.963550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.620 [2024-11-19 11:22:13.963582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.620 [2024-11-19 11:22:13.963613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.620 [2024-11-19 11:22:13.963640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.620 [2024-11-19 11:22:13.963664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.620 [2024-11-19 11:22:13.963711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.620 [2024-11-19 11:22:13.963737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.620 [2024-11-19 11:22:13.963764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.620 [2024-11-19 11:22:13.963786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1fa00 is same with the state(6) to be set
00:20:18.620 [2024-11-19 11:22:13.963871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.620 [2024-11-19 11:22:13.963899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.620 [2024-11-19 11:22:13.963932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.620 [2024-11-19 11:22:13.963955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.620 [2024-11-19 11:22:13.963980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.620 [2024-11-19 11:22:13.964005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.620 [2024-11-19 11:22:13.964030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.620 [2024-11-19 11:22:13.964054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.620 [2024-11-19 11:22:13.964076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1270 is same with the state(6) to be set
00:20:18.620 [2024-11-19 11:22:13.964145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.620 [2024-11-19 11:22:13.964173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.620 [2024-11-19 11:22:13.964198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.620 [2024-11-19 11:22:13.964220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.620 [2024-11-19 11:22:13.964249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.620 [2024-11-19 11:22:13.964272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.620 [2024-11-19 11:22:13.964295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.620 [2024-11-19 11:22:13.964319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.620 [2024-11-19 11:22:13.964339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8220 is same with the state(6) to be set
00:20:18.621 [2024-11-19 11:22:13.964434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.621 [2024-11-19 11:22:13.964465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.621 [2024-11-19 11:22:13.964491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.621 [2024-11-19 11:22:13.964517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.621 [2024-11-19 11:22:13.964540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.621 [2024-11-19 11:22:13.964565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.621 [2024-11-19 11:22:13.964589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:18.621 [2024-11-19 11:22:13.964612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.621 [2024-11-19 11:22:13.964635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f16f0 is same with the state(6) to be set
00:20:18.621 [2024-11-19 11:22:13.965432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48110 is same with the state(6) to be set
00:20:18.621 [2024-11-19 11:22:13.966482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48490 is same with the state(6) to be set
00:20:18.621 [2024-11-19 11:22:13.966877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48810 is same with the state(6) to be set
00:20:18.622 [2024-11-19 11:22:13.968405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.968442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.968476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.968505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.968533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.968570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.968615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.968641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.968690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.968714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.968742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.968767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.968794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.968817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.968845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.968868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.968910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.968934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.968961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.968985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.969962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.969988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.970019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.970044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.970071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.970096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.970122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.970147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.970173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.970198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.970223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.970248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.970273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.970297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.970323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.970375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.970406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.970433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.622 [2024-11-19 11:22:13.970459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.622 [2024-11-19 11:22:13.970485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.623 [2024-11-19 11:22:13.970513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.623 [2024-11-19 11:22:13.970539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.970565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.970592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.970619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.970664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.970691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.970735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.970760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.970784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.970810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.970834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.970860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.970884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.970910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.970933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.970959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.970982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 
11:22:13.971492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.971952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.971976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.972028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:18.623 [2024-11-19 11:22:13.972756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:18.623 [2024-11-19 11:22:13.972795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:18.623 [2024-11-19 11:22:13.972847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8220 (9): Bad 
file descriptor 00:20:18.623 [2024-11-19 11:22:13.972882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f16f0 (9): Bad file descriptor 00:20:18.623 [2024-11-19 11:22:13.974686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.974718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.974752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.974777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.974804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.974827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.974855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.974878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.974905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.974929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.974955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.974978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.975006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.975029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.623 [2024-11-19 11:22:13.975056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.623 [2024-11-19 11:22:13.975079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975863] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.975967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.975991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976145] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 
11:22:13.976806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.976925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.624 [2024-11-19 11:22:13.976952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.624 [2024-11-19 11:22:13.981558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48810 is same with the state(6) to be set 00:20:18.624 [2024-11-19 11:22:13.981589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48810 is same with the state(6) to be set 00:20:18.624 [2024-11-19 11:22:13.981603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48810 is same with the state(6) to be set 00:20:18.624 [2024-11-19 11:22:13.981615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48810 is same with the state(6) to be set 00:20:18.624 [2024-11-19 11:22:13.981627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48810 is 
same with the state(6) to be set 00:20:18.625 [2024-11-19 11:22:13.981638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48810 is same with the state(6) to be set 00:20:18.625 [2024-11-19 11:22:13.981650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48810 is same with the state(6) to be set 00:20:18.625 [2024-11-19 11:22:13.981661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48810 is same with the state(6) to be set 00:20:18.625 [2024-11-19 11:22:13.988317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.988380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.988415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.988444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.988472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.988501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.988533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.988562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.988587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.988614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.988639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.988668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.988694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.988722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.988747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.988777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.988802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.988830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.988854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.988883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.988907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.988936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.988961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.988989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.989013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.989042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.989067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.989096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.989120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.989148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.989172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:18.625 [2024-11-19 11:22:13.989207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.989232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.989260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.989285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.989313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.989337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.989374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.989403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.989431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.989465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.989493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.625 [2024-11-19 11:22:13.989518] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.989931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:18.625 [2024-11-19 11:22:13.990050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb21cc0 (9): Bad file descriptor 00:20:18.625 [2024-11-19 11:22:13.990133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ef1d0 (9): Bad file descriptor 00:20:18.625 [2024-11-19 11:22:13.990221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.625 [2024-11-19 11:22:13.990259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.990284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.625 [2024-11-19 11:22:13.990309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.990333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.625 [2024-11-19 11:22:13.990355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.990395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.625 [2024-11-19 11:22:13.990418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 
[2024-11-19 11:22:13.990442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659110 is same with the state(6) to be set 00:20:18.625 [2024-11-19 11:22:13.990522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.625 [2024-11-19 11:22:13.990554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.990585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.625 [2024-11-19 11:22:13.990618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.990642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.625 [2024-11-19 11:22:13.990672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.990698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.625 [2024-11-19 11:22:13.990719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.990742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22220 is same with the state(6) to be set 00:20:18.625 [2024-11-19 11:22:13.990778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66aa0 (9): Bad file descriptor 00:20:18.625 [2024-11-19 11:22:13.990853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.625 [2024-11-19 11:22:13.990885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.990909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.625 [2024-11-19 11:22:13.990933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.990958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.625 [2024-11-19 11:22:13.990982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.625 [2024-11-19 11:22:13.991015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.625 [2024-11-19 11:22:13.991039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:13.991061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb665d0 is same with the state(6) to be set 00:20:18.626 [2024-11-19 11:22:13.991106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1fa00 (9): Bad file descriptor 00:20:18.626 [2024-11-19 11:22:13.991158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f1270 (9): Bad file descriptor 00:20:18.626 [2024-11-19 11:22:13.994029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:18.626 [2024-11-19 11:22:13.994089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xb22220 (9): Bad file descriptor 00:20:18.626 [2024-11-19 11:22:13.994263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.626 [2024-11-19 11:22:13.994302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f16f0 with addr=10.0.0.2, port=4420 00:20:18.626 [2024-11-19 11:22:13.994329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f16f0 is same with the state(6) to be set 00:20:18.626 [2024-11-19 11:22:13.994519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.626 [2024-11-19 11:22:13.994559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e8220 with addr=10.0.0.2, port=4420 00:20:18.626 [2024-11-19 11:22:13.994584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8220 is same with the state(6) to be set 00:20:18.626 [2024-11-19 11:22:13.994760] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:18.626 [2024-11-19 11:22:13.995648] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:18.626 [2024-11-19 11:22:13.995806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.626 [2024-11-19 11:22:13.995841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb21cc0 with addr=10.0.0.2, port=4420 00:20:18.626 [2024-11-19 11:22:13.995869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21cc0 is same with the state(6) to be set 00:20:18.626 [2024-11-19 11:22:13.995918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f16f0 (9): Bad file descriptor 00:20:18.626 [2024-11-19 11:22:13.995955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8220 (9): Bad file descriptor 00:20:18.626 [2024-11-19 11:22:13.996148] 
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:18.626 [2024-11-19 11:22:13.996250] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:18.626 [2024-11-19 11:22:13.996372] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:18.626 [2024-11-19 11:22:13.997055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.626 [2024-11-19 11:22:13.997092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb22220 with addr=10.0.0.2, port=4420 00:20:18.626 [2024-11-19 11:22:13.997118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22220 is same with the state(6) to be set 00:20:18.626 [2024-11-19 11:22:13.997150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb21cc0 (9): Bad file descriptor 00:20:18.626 [2024-11-19 11:22:13.997179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:18.626 [2024-11-19 11:22:13.997204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:18.626 [2024-11-19 11:22:13.997228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:18.626 [2024-11-19 11:22:13.997265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:18.626 [2024-11-19 11:22:13.997292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:18.626 [2024-11-19 11:22:13.997315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:18.626 [2024-11-19 11:22:13.997337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:20:18.626 [2024-11-19 11:22:13.997360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:18.626 [2024-11-19 11:22:13.997596] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:18.626 [2024-11-19 11:22:13.997647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb22220 (9): Bad file descriptor 00:20:18.626 [2024-11-19 11:22:13.997678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:18.626 [2024-11-19 11:22:13.997703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:18.626 [2024-11-19 11:22:13.997724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:18.626 [2024-11-19 11:22:13.997747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:18.626 [2024-11-19 11:22:13.997855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:18.626 [2024-11-19 11:22:13.997894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:18.626 [2024-11-19 11:22:13.997926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:18.626 [2024-11-19 11:22:13.997957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:20:18.626 [2024-11-19 11:22:13.999959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x659110 (9): Bad file descriptor 00:20:18.626 [2024-11-19 11:22:14.000026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb665d0 (9): Bad file descriptor 00:20:18.626 [2024-11-19 11:22:14.000239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.626 [2024-11-19 11:22:14.000969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.626 [2024-11-19 11:22:14.000994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:18.627 [2024-11-19 11:22:14.001126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001426] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.001962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.001989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.002015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.002042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.002069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.002096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.002123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.002150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.002176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.002210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.002237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.002265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 11:22:14.002291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.627 [2024-11-19 11:22:14.002320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.627 [2024-11-19 
11:22:14.002346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: READ sqid:1 cid:35-59 nsid:1 (lba 20864-23936, len:128), each completing ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:18.628 [2024-11-19 11:22:14.003754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2c90 is same with the state(6) to be set
[... repeated command/completion pairs elided: WRITE sqid:1 cid:62-63 (lba 24320-24448, len:128) and READ sqid:1 cid:0-61 nsid:1 (lba 16384-24192, len:128), each completing ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:18.630 [2024-11-19 11:22:14.008872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf4010 is same with the state(6) to be set
[... repeated command/completion pairs elided: READ sqid:1 cid:0-27 nsid:1 (lba 8192-11648, len:128), each completing ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:18.630 [2024-11-19 11:22:14.011980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.630 [2024-11-19 11:22:14.012004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.630 [2024-11-19 11:22:14.012033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.630 [2024-11-19 11:22:14.012058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.630 [2024-11-19 11:22:14.012087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.630 [2024-11-19 11:22:14.012111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.630 [2024-11-19 11:22:14.012140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.630 [2024-11-19 11:22:14.012164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.630 [2024-11-19 11:22:14.012192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.630 [2024-11-19 11:22:14.012223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.630 [2024-11-19 11:22:14.012254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.630 [2024-11-19 11:22:14.012279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.630 [2024-11-19 
11:22:14.012308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.012370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.012425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.012476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.012528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.012581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.012635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.012692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.012745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.012798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.012850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.012910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.012963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.012990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:18.631 [2024-11-19 11:22:14.013229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.013911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.013939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf5410 is same with the state(6) to be set 00:20:18.631 [2024-11-19 11:22:14.015526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.015558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.015593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.015619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.015651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.631 [2024-11-19 11:22:14.015676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.631 [2024-11-19 11:22:14.015706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:18.632 [2024-11-19 11:22:14.015730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.015760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.015785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.015814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.015845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.015875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.015901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.015930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.015955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.015984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.016967] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.016992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.017019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.017044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.017073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.017099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.017127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.017152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.017182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.017212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.017241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.017266] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.017295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.017320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.017348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.017383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.017414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.017439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.017468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.017491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.017520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.632 [2024-11-19 11:22:14.017546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.632 [2024-11-19 11:22:14.017575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.632 [2024-11-19 11:22:14.017600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.632 [2024-11-19 11:22:14.017629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.632 [2024-11-19 11:22:14.017654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.632 [2024-11-19 11:22:14.017682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.632 [2024-11-19 11:22:14.017707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.632 [2024-11-19 11:22:14.017733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.017758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.017785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.017811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.017838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.017864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.017896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.017922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.017949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.017976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.018956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.018985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.019009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.019035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988460 is same with the state(6) to be set
00:20:18.633 [2024-11-19 11:22:14.020537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:20:18.633 [2024-11-19 11:22:14.020580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:20:18.633 [2024-11-19 11:22:14.020615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:18.633 [2024-11-19 11:22:14.020796] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:20:18.633 [2024-11-19 11:22:14.020848] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:20:18.633 [2024-11-19 11:22:14.020882] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:20:18.633 [2024-11-19 11:22:14.021021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:20:18.633 [2024-11-19 11:22:14.021057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:20:18.633 [2024-11-19 11:22:14.021095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:18.633 [2024-11-19 11:22:14.021382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:18.633 [2024-11-19 11:22:14.021423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ef1d0 with addr=10.0.0.2, port=4420
00:20:18.633 [2024-11-19 11:22:14.021451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ef1d0 is same with the state(6) to be set
00:20:18.633 [2024-11-19 11:22:14.021651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:18.633 [2024-11-19 11:22:14.021694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f1270 with addr=10.0.0.2, port=4420
00:20:18.633 [2024-11-19 11:22:14.021722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1270 is same with the state(6) to be set
00:20:18.633 [2024-11-19 11:22:14.021853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:18.633 [2024-11-19 11:22:14.021888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb1fa00 with addr=10.0.0.2, port=4420
00:20:18.633 [2024-11-19 11:22:14.021913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1fa00 is same with the state(6) to be set
00:20:18.633 [2024-11-19 11:22:14.023034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.023067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.023102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.023131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.023159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.023185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.023213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.023238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.023266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.633 [2024-11-19 11:22:14.023292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.633 [2024-11-19 11:22:14.023320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.023344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.023382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.023409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.023438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.023463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.023491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.023522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.023550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.023576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.023604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.023629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.023659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.023684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.023714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.023739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.023768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.023792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.023822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.023847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.023875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.023900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.023928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.023952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.023982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.024970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.024995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.025023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.025049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.025076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.025102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.025129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.025156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.025183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.025209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.025236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.025263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.025290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.025316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.634 [2024-11-19 11:22:14.025344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.634 [2024-11-19 11:22:14.025380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.025409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.025436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.025463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.025490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.025517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.025542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.025575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.025601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.025628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.025654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.025682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.025707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.025735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.025760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.025789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.025814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.025842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.025866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.025895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.025919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.025948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.025972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.026001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.026027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.026056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.026081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.026110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.026134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.026163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.026189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.026218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.026248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.026277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.026303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.026332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.026356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.026396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.026421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.026450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.026475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.026504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.026529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.026556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf6880 is same with the state(6) to be set
00:20:18.635 [2024-11-19 11:22:14.028112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.028144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.028182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.028210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.028240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.028265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.028293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.028319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.028346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.028383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.028413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.028439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.028469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.028499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.028528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.028552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.028581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.028606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.028635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.028659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.028687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.028712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.635 [2024-11-19 11:22:14.028740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.635 [2024-11-19 11:22:14.028764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.636 [2024-11-19 11:22:14.028792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.636 [2024-11-19 11:22:14.028816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.636 [2024-11-19 11:22:14.028845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.636 [2024-11-19 11:22:14.028870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.636 [2024-11-19 11:22:14.028899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.636 [2024-11-19 11:22:14.028923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.636 [2024-11-19 11:22:14.028951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.636 [2024-11-19 11:22:14.028976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.636 [2024-11-19 11:22:14.029005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.636 [2024-11-19 11:22:14.029029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.636 [2024-11-19 11:22:14.029056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.636 [2024-11-19 11:22:14.029080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.636 [2024-11-19 11:22:14.029108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:18.636 [2024-11-19 11:22:14.029134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:18.636 [2024-11-19 11:22:14.029169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19
nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:18.636 [2024-11-19 11:22:14.029500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029794] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.029962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.029991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 
11:22:14.030722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.636 [2024-11-19 11:22:14.030751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.636 [2024-11-19 11:22:14.030775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.030804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.030828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.030858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.030883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.030911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.030936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.030965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.030991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.031019] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.031043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.031072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.031096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.031125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.031150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.031179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.031203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.031237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.031263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.031292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.031316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.031344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.031376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.031406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.031432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.031460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.031485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.031514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.031541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.031571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.637 [2024-11-19 11:22:14.031597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.637 [2024-11-19 11:22:14.031623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986fb0 is same with the state(6) to be set 00:20:18.637 [2024-11-19 
11:22:14.033463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:20:18.637 [2024-11-19 11:22:14.033508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:18.637 [2024-11-19 11:22:14.033546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:20:18.637 task offset: 23424 on job bdev=Nvme1n1 fails
00:20:18.637
00:20:18.637 Latency(us)
[2024-11-19T10:22:14.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:18.637 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:18.637 Job: Nvme1n1 ended in about 0.71 seconds with error
00:20:18.637 Verification LBA range: start 0x0 length 0x400
00:20:18.637 Nvme1n1 : 0.71 179.39 11.21 89.70 0.00 234383.99 15243.19 237677.23
00:20:18.637 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:18.637 Job: Nvme2n1 ended in about 0.71 seconds with error
00:20:18.637 Verification LBA range: start 0x0 length 0x400
00:20:18.637 Nvme2n1 : 0.71 179.04 11.19 89.52 0.00 228449.09 10000.31 239230.67
00:20:18.637 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:18.637 Job: Nvme3n1 ended in about 0.75 seconds with error
00:20:18.637 Verification LBA range: start 0x0 length 0x400
00:20:18.637 Nvme3n1 : 0.75 170.09 10.63 85.05 0.00 234625.96 20097.71 262532.36
00:20:18.637 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:18.637 Job: Nvme4n1 ended in about 0.76 seconds with error
00:20:18.637 Verification LBA range: start 0x0 length 0x400
00:20:18.637 Nvme4n1 : 0.76 168.95 10.56 84.48 0.00 229953.99 29515.47 251658.24
00:20:18.637 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:18.637 Job: Nvme5n1 ended in about 0.76 seconds with error
00:20:18.637 Verification LBA range: start 0x0 length 0x400
00:20:18.637 Nvme5n1 : 0.76 83.91 5.24 83.91 0.00 338062.22 21456.97 288940.94
00:20:18.637 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:18.637 Job: Nvme6n1 ended in about 0.78 seconds with error
00:20:18.637 Verification LBA range: start 0x0 length 0x400
00:20:18.637 Nvme6n1 : 0.78 82.55 5.16 82.55 0.00 334805.52 29515.47 323116.75
00:20:18.637 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:18.637 Job: Nvme7n1 ended in about 0.72 seconds with error
00:20:18.637 Verification LBA range: start 0x0 length 0x400
00:20:18.637 Nvme7n1 : 0.72 177.25 11.08 88.63 0.00 199078.05 5388.52 262532.36
00:20:18.637 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:18.637 Job: Nvme8n1 ended in about 0.74 seconds with error
00:20:18.637 Verification LBA range: start 0x0 length 0x400
00:20:18.637 Nvme8n1 : 0.74 172.69 10.79 86.34 0.00 199098.97 20291.89 264085.81
00:20:18.637 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:18.637 Job: Nvme9n1 ended in about 0.78 seconds with error
00:20:18.637 Verification LBA range: start 0x0 length 0x400
00:20:18.637 Nvme9n1 : 0.78 82.02 5.13 82.02 0.00 309091.37 21165.70 292047.83
00:20:18.637 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:18.637 Job: Nvme10n1 ended in about 0.77 seconds with error
00:20:18.637 Verification LBA range: start 0x0 length 0x400
00:20:18.637 Nvme10n1 : 0.77 83.36 5.21 83.36 0.00 293741.61 33787.45 274959.93
00:20:18.637 [2024-11-19T10:22:14.134Z] ===================================================================================================================
00:20:18.637 [2024-11-19T10:22:14.134Z] Total : 1379.26 86.20 855.55 0.00 251083.52 5388.52 323116.75
00:20:18.637 [2024-11-19 11:22:14.066686] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:18.637 [2024-11-19 11:22:14.066778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect:
*NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:20:18.637 [2024-11-19 11:22:14.067135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:18.637 [2024-11-19 11:22:14.067176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb66aa0 with addr=10.0.0.2, port=4420
00:20:18.637 [2024-11-19 11:22:14.067209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb66aa0 is same with the state(6) to be set
00:20:18.637 [2024-11-19 11:22:14.067377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:18.637 [2024-11-19 11:22:14.067418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e8220 with addr=10.0.0.2, port=4420
00:20:18.637 [2024-11-19 11:22:14.067447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8220 is same with the state(6) to be set
00:20:18.637 [2024-11-19 11:22:14.067637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:18.637 [2024-11-19 11:22:14.067680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f16f0 with addr=10.0.0.2, port=4420
00:20:18.637 [2024-11-19 11:22:14.067708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f16f0 is same with the state(6) to be set
00:20:18.637 [2024-11-19 11:22:14.067747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ef1d0 (9): Bad file descriptor
00:20:18.637 [2024-11-19 11:22:14.067799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f1270 (9): Bad file descriptor
00:20:18.637 [2024-11-19 11:22:14.067834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1fa00 (9): Bad file descriptor
00:20:18.637 [2024-11-19 11:22:14.067951] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:20:18.637 [2024-11-19 11:22:14.067987] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:20:18.637 [2024-11-19 11:22:14.068023] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:20:18.638 [2024-11-19 11:22:14.068057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f16f0 (9): Bad file descriptor
00:20:18.638 [2024-11-19 11:22:14.068099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8220 (9): Bad file descriptor
00:20:18.638 [2024-11-19 11:22:14.068139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb66aa0 (9): Bad file descriptor
00:20:18.638 [2024-11-19 11:22:14.068499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:18.638 [2024-11-19 11:22:14.068533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb21cc0 with addr=10.0.0.2, port=4420
00:20:18.638 [2024-11-19 11:22:14.068562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21cc0 is same with the state(6) to be set
00:20:18.638 [2024-11-19 11:22:14.068753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:18.638 [2024-11-19 11:22:14.068790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb22220 with addr=10.0.0.2, port=4420
00:20:18.638 [2024-11-19 11:22:14.068818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22220 is same with the state(6) to be set
00:20:18.638 [2024-11-19 11:22:14.068962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:18.638 [2024-11-19 11:22:14.068993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x659110 with addr=10.0.0.2, port=4420
00:20:18.638 [2024-11-19 11:22:14.069020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659110 is same with the state(6) to be set
00:20:18.638 [2024-11-19 11:22:14.069182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:18.638 [2024-11-19 11:22:14.069220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb665d0 with addr=10.0.0.2, port=4420
00:20:18.638 [2024-11-19 11:22:14.069247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb665d0 is same with the state(6) to be set
00:20:18.638 [2024-11-19 11:22:14.069289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:20:18.638 [2024-11-19 11:22:14.069314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:20:18.638 [2024-11-19 11:22:14.069341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:20:18.638 [2024-11-19 11:22:14.069376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:20:18.638 [2024-11-19 11:22:14.069407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:20:18.638 [2024-11-19 11:22:14.069429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:20:18.638 [2024-11-19 11:22:14.069458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:20:18.638 [2024-11-19 11:22:14.069480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:20:18.638 [2024-11-19 11:22:14.069506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:20:18.638 [2024-11-19 11:22:14.069527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:20:18.638 [2024-11-19 11:22:14.069548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:20:18.638 [2024-11-19 11:22:14.069578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:20:18.638 [2024-11-19 11:22:14.069656] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:20:18.638 [2024-11-19 11:22:14.069694] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:20:18.638 [2024-11-19 11:22:14.069728] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:20:18.638 [2024-11-19 11:22:14.070511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb21cc0 (9): Bad file descriptor
00:20:18.638 [2024-11-19 11:22:14.070546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb22220 (9): Bad file descriptor
00:20:18.638 [2024-11-19 11:22:14.070580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x659110 (9): Bad file descriptor
00:20:18.638 [2024-11-19 11:22:14.070613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb665d0 (9): Bad file descriptor
00:20:18.638 [2024-11-19 11:22:14.070643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:20:18.638 [2024-11-19 11:22:14.070666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:20:18.638 [2024-11-19 11:22:14.070690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:20:18.638 [2024-11-19 11:22:14.070713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:20:18.638 [2024-11-19 11:22:14.070737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:20:18.638 [2024-11-19 11:22:14.070761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:20:18.638 [2024-11-19 11:22:14.070782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:20:18.638 [2024-11-19 11:22:14.070805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:20:18.638 [2024-11-19 11:22:14.070831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:18.638 [2024-11-19 11:22:14.070852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:18.638 [2024-11-19 11:22:14.070876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:18.638 [2024-11-19 11:22:14.070898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:18.638 [2024-11-19 11:22:14.071373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:18.638 [2024-11-19 11:22:14.071409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:20:18.638 [2024-11-19 11:22:14.071440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:20:18.638 [2024-11-19 11:22:14.071498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:20:18.638 [2024-11-19 11:22:14.071525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:20:18.638 [2024-11-19 11:22:14.071548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:20:18.638 [2024-11-19 11:22:14.071572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:20:18.638 [2024-11-19 11:22:14.071596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:20:18.638 [2024-11-19 11:22:14.071626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:20:18.638 [2024-11-19 11:22:14.071648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:20:18.638 [2024-11-19 11:22:14.071670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:20:18.638 [2024-11-19 11:22:14.071694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:20:18.638 [2024-11-19 11:22:14.071716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:20:18.638 [2024-11-19 11:22:14.071739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:20:18.638 [2024-11-19 11:22:14.071759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:20:18.638 [2024-11-19 11:22:14.071784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:20:18.638 [2024-11-19 11:22:14.071806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:20:18.638 [2024-11-19 11:22:14.071828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:20:18.638 [2024-11-19 11:22:14.071850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:20:18.638 [2024-11-19 11:22:14.072164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.638 [2024-11-19 11:22:14.072202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb1fa00 with addr=10.0.0.2, port=4420 00:20:18.638 [2024-11-19 11:22:14.072229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1fa00 is same with the state(6) to be set 00:20:18.638 [2024-11-19 11:22:14.072360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.638 [2024-11-19 11:22:14.072444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f1270 with addr=10.0.0.2, port=4420 00:20:18.638 [2024-11-19 11:22:14.072469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1270 is same with the state(6) to be set 00:20:18.638 [2024-11-19 11:22:14.072724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.638 [2024-11-19 11:22:14.072756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ef1d0 with addr=10.0.0.2, port=4420 00:20:18.638 [2024-11-19 11:22:14.072782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ef1d0 is same with the state(6) to be set 00:20:18.638 [2024-11-19 11:22:14.072841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1fa00 (9): Bad file descriptor 00:20:18.638 [2024-11-19 11:22:14.072877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f1270 (9): Bad file descriptor 00:20:18.638 [2024-11-19 11:22:14.072910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ef1d0 (9): Bad file descriptor 00:20:18.638 [2024-11-19 11:22:14.073000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:18.638 [2024-11-19 
11:22:14.073029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:18.638 [2024-11-19 11:22:14.073052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:18.638 [2024-11-19 11:22:14.073076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:18.638 [2024-11-19 11:22:14.073101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:18.638 [2024-11-19 11:22:14.073123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:18.638 [2024-11-19 11:22:14.073152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:18.639 [2024-11-19 11:22:14.073174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:18.639 [2024-11-19 11:22:14.073200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:18.639 [2024-11-19 11:22:14.073221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:18.639 [2024-11-19 11:22:14.073244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:18.639 [2024-11-19 11:22:14.073266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:20:19.206 11:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:20:20.143 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2655247
00:20:20.143 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:20:20.143 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2655247
00:20:20.143 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:20:20.143 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:20.143 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:20:20.143 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2655247
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2655073 ']'
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2655073
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2655073 ']'
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2655073
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2655073) - No such process
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2655073 is not found'
Process with pid 2655073 is not found
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:20.144 11:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:22.682
00:20:22.682 real 0m7.285s
00:20:22.682 user 0m17.491s
00:20:22.682 sys 0m1.435s
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:22.682 ************************************
00:20:22.682 END TEST nvmf_shutdown_tc3
00:20:22.682 ************************************
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:22.682 ************************************
00:20:22.682 START TEST nvmf_shutdown_tc4
00:20:22.682 ************************************
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:22.682 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)'
Found 0000:82:00.0 (0x8086 - 0x159b)
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)'
Found 0000:82:00.1 (0x8086 - 0x159b)
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0'
00:20:22.683 Found net devices under 0000:82:00.0: cvl_0_0
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1'
Found net devices under 0000:82:00.1: cvl_0_1
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:22.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:22.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:20:22.683 00:20:22.683 --- 10.0.0.2 ping statistics --- 00:20:22.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.683 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:22.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:22.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:20:22.683 00:20:22.683 --- 10.0.0.1 ping statistics --- 00:20:22.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.683 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:22.683 11:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:22.683 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:22.684 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.684 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:22.684 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2656048 00:20:22.684 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:22.684 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2656048 00:20:22.684 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2656048 ']' 00:20:22.684 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.684 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.684 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
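The waitforlisten call above blocks until the freshly launched nvmf_tgt (pid 2656048) is up and answering on /var/tmp/spdk.sock, using the local rpc_addr and max_retries=100 variables visible in the trace. A minimal sketch of that pattern, assuming a 100 ms poll interval and a plain socket-file check in place of the full RPC probe that autotest_common.sh actually performs:

```shell
# Hedged sketch of the waitforlisten pattern: poll until the target pid is
# alive and its RPC UNIX socket exists, giving up after max_retries attempts.
# The 0.1 s interval and the bare -S check are assumptions for illustration.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i=0
    while (( i++ < max_retries )); do
        # Process must still be running and the UNIX socket must be present.
        if kill -0 "$pid" 2>/dev/null && [ -S "$rpc_addr" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

The real helper goes further and issues an actual RPC to confirm the listener is responsive, not merely that the socket file exists.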
00:20:22.684 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.684 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:22.684 [2024-11-19 11:22:17.929749] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:20:22.684 [2024-11-19 11:22:17.929829] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.684 [2024-11-19 11:22:18.015464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.684 [2024-11-19 11:22:18.075680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.684 [2024-11-19 11:22:18.075741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.684 [2024-11-19 11:22:18.075763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.684 [2024-11-19 11:22:18.075781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.684 [2024-11-19 11:22:18.075796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
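The -m 0x1E core mask passed to nvmf_tgt selects cores 1 through 4, which is why exactly four reactors come up on those cores in the lines that follow. A purely illustrative sketch of how such a hex mask decodes to a core list (SPDK does this in its C startup path; the function here is a hypothetical helper):

```shell
# Hedged sketch: decode an SPDK-style core mask (e.g. 0x1E) into the list of
# core indices whose bits are set. Illustrative only, not SPDK code.
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=()
    while (( mask )); do
        # Bit N set means core N participates in the reactor set.
        if (( mask & 1 )); then
            cores+=("$core")
        fi
        core=$(( core + 1 ))
        mask=$(( mask >> 1 ))
    done
    echo "${cores[@]}"
}
```

For 0x1E (binary 11110) this yields cores 1 2 3 4, matching the four "Reactor started on core" notices.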
00:20:22.684 [2024-11-19 11:22:18.077518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.684 [2024-11-19 11:22:18.077600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:22.684 [2024-11-19 11:22:18.077542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.684 [2024-11-19 11:22:18.077603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:22.942 [2024-11-19 11:22:18.228788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.942 11:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:22.942 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.943 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:22.943 Malloc1 00:20:22.943 [2024-11-19 11:22:18.324545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.943 Malloc2 00:20:22.943 Malloc3 00:20:23.201 Malloc4 00:20:23.201 Malloc5 00:20:23.201 Malloc6 00:20:23.201 Malloc7 00:20:23.201 Malloc8 00:20:23.459 Malloc9 
00:20:23.459 Malloc10 00:20:23.459 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.459 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:23.459 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:23.459 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:23.459 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2656216 00:20:23.459 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:23.459 11:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:23.459 [2024-11-19 11:22:18.847189] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
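At this point spdk_nvme_perf (pid 2656216) is driving queue-depth-128 random writes over NVMe/TCP while the test kills the target underneath it, which is the shutdown behavior tc4 exercises. The killprocess helper used for that reduces to: confirm the pid is alive with kill -0, log, signal, reap. A minimal sketch under those assumptions (the real helper in autotest_common.sh also inspects the process name and escalates to sudo for root-owned targets):

```shell
# Hedged sketch of the killprocess pattern: liveness check, log line matching
# the trace, SIGTERM, then reap. Simplified; omits the process-name and sudo
# handling the real helper performs.
killprocess() {
    local pid=$1
    # Refuse to act if the process is already gone.
    kill -0 "$pid" 2>/dev/null || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # Reap it if it is our child; ignore failure otherwise.
    wait "$pid" 2>/dev/null || true
}
```

Killing the target mid-I/O is what produces the CQ transport errors and failed writes in the perf output that follows.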
00:20:28.732 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:28.732 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2656048 00:20:28.732 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2656048 ']' 00:20:28.732 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2656048 00:20:28.732 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:20:28.732 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.732 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2656048 00:20:28.732 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:28.732 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:28.732 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2656048' 00:20:28.732 killing process with pid 2656048 00:20:28.732 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2656048 00:20:28.732 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2656048 00:20:28.732 [2024-11-19 11:22:23.853921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 
11:22:23.854011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854173] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b4c0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b9b0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b9b0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b9b0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b9b0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b9b0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b9b0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b9b0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x96b9b0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.854806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96b9b0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.855717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96be80 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.855750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96be80 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.855766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96be80 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.855779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96be80 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.855791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96be80 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.856711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96aff0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.856744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96aff0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.856760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96aff0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.856773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96aff0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.856785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96aff0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.856798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96aff0 
is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.856810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96aff0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.856823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96aff0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.858345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cbc0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.858413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cbc0 is same with the state(6) to be set 00:20:28.732 [2024-11-19 11:22:23.858430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cbc0 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.858443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cbc0 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.858456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cbc0 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.858467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cbc0 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.858480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cbc0 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.858492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cbc0 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.858504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96cbc0 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.859165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d090 is same with the state(6) to be set 
00:20:28.733 [2024-11-19 11:22:23.859193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d090 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.859208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d090 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.859220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d090 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.859232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d090 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.859244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d090 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.859256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d090 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.859267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d090 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.860196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c220 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.860222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c220 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.860236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c220 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.860249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c220 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.860261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c220 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.860273] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c220 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.860284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c220 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.860296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c220 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.860308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c220 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.860319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c220 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.861435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96da30 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.861471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96da30 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.861487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96da30 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.861500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96da30 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.861512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96da30 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.861524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96da30 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.861536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96da30 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.861548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x96da30 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.861560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96da30 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.861572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96da30 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.862286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96df00 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.862312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96df00 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.862325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96df00 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.862351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96df00 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.862375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96df00 is same with the state(6) to be set 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 starting I/O failed: -6 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 [2024-11-19 11:22:23.862803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e3d0 is same with the state(6) to be set 00:20:28.733 starting I/O failed: -6 00:20:28.733 [2024-11-19 11:22:23.862830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e3d0 is same with the state(6) to be set 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 [2024-11-19 11:22:23.862844] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e3d0 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.862856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e3d0 is same with the state(6) to be set 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 [2024-11-19 11:22:23.862868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e3d0 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.862880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e3d0 is same with the state(6) to be set 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 [2024-11-19 11:22:23.862893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e3d0 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.862905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e3d0 is same with the state(6) to be set 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 starting I/O failed: -6 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 starting I/O failed: -6 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 starting I/O failed: -6 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 starting I/O failed: -6 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, 
sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 starting I/O failed: -6 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 starting I/O failed: -6 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 [2024-11-19 11:22:23.863521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d560 is same with the state(6) to be set 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 [2024-11-19 11:22:23.863549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d560 is same with the state(6) to be set 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 [2024-11-19 11:22:23.863563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d560 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.863579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d560 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.863591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d560 is same with the state(6) to be set 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 starting I/O failed: -6 00:20:28.733 [2024-11-19 11:22:23.863606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d560 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.863618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d560 is same with the state(6) to be set 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 [2024-11-19 11:22:23.863629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d560 is same with the
state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.863646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d560 is same with the state(6) to be set 00:20:28.733 [2024-11-19 11:22:23.863658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d560 is same with the state(6) to be set 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 starting I/O failed: -6 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 [2024-11-19 11:22:23.863867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:28.733 starting I/O failed: -6 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.733 starting I/O failed: -6 00:20:28.733 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error 
(sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 [2024-11-19 11:22:23.865228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 
00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 [2024-11-19 11:22:23.865867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ed70 is same with the state(6) to be set 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 [2024-11-19 11:22:23.865894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ed70 is same with the state(6) to be set 00:20:28.734 starting I/O failed: -6 00:20:28.734 [2024-11-19 11:22:23.865907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ed70 is 
same with the state(6) to be set 00:20:28.734 [2024-11-19 11:22:23.865920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ed70 is same with the state(6) to be set 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 [2024-11-19 11:22:23.865932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ed70 is same with the state(6) to be set 00:20:28.734 starting I/O failed: -6 00:20:28.734 [2024-11-19 11:22:23.865944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96ed70 is same with the state(6) to be set 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 [2024-11-19 11:22:23.866342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 [2024-11-19 11:22:23.866413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same 
with the state(6) to be set 00:20:28.734 starting I/O failed: -6 00:20:28.734 [2024-11-19 11:22:23.866434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 [2024-11-19 11:22:23.866447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 starting I/O failed: -6 00:20:28.734 [2024-11-19 11:22:23.866459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 [2024-11-19 11:22:23.866471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 [2024-11-19 11:22:23.866483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 starting I/O failed: -6 00:20:28.734 [2024-11-19 11:22:23.866495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 [2024-11-19 11:22:23.866507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 [2024-11-19 11:22:23.866519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 [2024-11-19 11:22:23.866531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 [2024-11-19 11:22:23.866542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 
starting I/O failed: -6 00:20:28.734 [2024-11-19 11:22:23.866554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 [2024-11-19 11:22:23.866566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 [2024-11-19 11:22:23.866578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca5d0 is same with the state(6) to be set 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 Write completed with error (sct=0, sc=8) 00:20:28.734 starting I/O failed: -6 00:20:28.734 [2024-11-19 11:22:23.866921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:28.734 [2024-11-19 11:22:23.866971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8caaa0 is same with the state(6) to be set 00:20:28.734 [2024-11-19 11:22:23.866997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8caaa0 is same with the state(6) to be set 00:20:28.734 [2024-11-19 11:22:23.867011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8caaa0 is same with the state(6) to be set 00:20:28.734 [2024-11-19 11:22:23.867023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8caaa0 is same with the state(6) to be set 00:20:28.735 [2024-11-19 11:22:23.867034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8caaa0 is same with the state(6) to be set 00:20:28.735 [2024-11-19 11:22:23.867045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8caaa0 is same with the state(6) to be set 00:20:28.735 [2024-11-19 11:22:23.867056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8caaa0 is same with the state(6) to be set 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 [2024-11-19 11:22:23.867074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8caaa0 is same with the state(6) to be set 00:20:28.735 starting I/O failed: -6 00:20:28.735 [2024-11-19 11:22:23.867088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8caaa0 is same with the state(6) to be set 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 
00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 [2024-11-19 11:22:23.867625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e8a0 is same with the state(6) to be set 00:20:28.735 starting I/O failed: -6 00:20:28.735 [2024-11-19 11:22:23.867652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e8a0 is same with the state(6) to be set 00:20:28.735 [2024-11-19 11:22:23.867666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e8a0 is same with the state(6) to be set 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 [2024-11-19 11:22:23.867678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e8a0 is same with the state(6) to be set 00:20:28.735 starting I/O failed: -6 00:20:28.735 [2024-11-19 11:22:23.867692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e8a0 is same with the state(6) to be set 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 [2024-11-19 11:22:23.867704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e8a0 is same with the state(6) to be set 00:20:28.735 starting I/O failed: -6 00:20:28.735 [2024-11-19 11:22:23.867718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e8a0 is same with the state(6) to be set 00:20:28.735 [2024-11-19 11:22:23.867730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e8a0 is same with the state(6) to be set 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write
completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 
Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 [2024-11-19 11:22:23.869490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:28.735 NVMe io qpair process completion error 00:20:28.735 [2024-11-19 11:22:23.874536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cd970 is same with the state(6) to be set 00:20:28.735 [2024-11-19 11:22:23.874581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cd970 is same with the state(6) to be set 00:20:28.735 [2024-11-19 11:22:23.874595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cd970 is same with the state(6) to be set 00:20:28.735 [2024-11-19 11:22:23.875298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cde40 is same with the state(6) to be set 00:20:28.735 [2024-11-19 11:22:23.875330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cde40 is same with the state(6) to be set 00:20:28.735 [2024-11-19 11:22:23.875345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cde40 is same with the state(6) to be set 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 
Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 Write completed with error (sct=0, sc=8) 00:20:28.735 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 
starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 [2024-11-19 11:22:23.876913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc630 is same with the state(6) to be set 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 [2024-11-19 11:22:23.876941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc630 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.876955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc630 is same with the state(6) to be set 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 [2024-11-19 11:22:23.876968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc630 is same with the state(6) to be set 00:20:28.736 starting I/O failed: -6 00:20:28.736 [2024-11-19 11:22:23.876980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc630 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.876992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc630 is same with the state(6) to be set 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 [2024-11-19 11:22:23.877004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc630 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc630 is same with the state(6) to be set 00:20:28.736 Write completed with 
error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 [2024-11-19 11:22:23.877358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccb00 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccb00 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccb00 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccb00 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:28.736 [2024-11-19 11:22:23.877452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccb00 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccb00 is same with the 
state(6) to be set 00:20:28.736 NVMe io qpair process completion error 00:20:28.736 [2024-11-19 11:22:23.877477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccb00 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccb00 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccfd0 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccfd0 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccfd0 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccfd0 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccfd0 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccfd0 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.877977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ccfd0 is same with the state(6) to be set 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 
00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 starting I/O failed: -6 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 [2024-11-19 11:22:23.878505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc160 is same with the state(6) to be set 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 [2024-11-19 11:22:23.878536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc160 is same with the state(6) to be set 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 [2024-11-19 11:22:23.878553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc160 is same with the state(6) to be set 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 [2024-11-19 11:22:23.878566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc160 is same with the state(6) to be set 00:20:28.736 starting I/O failed: -6 00:20:28.736 [2024-11-19 11:22:23.878580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc160 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.878592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc160 is same with the state(6) to be set 00:20:28.736 Write completed with error (sct=0, sc=8) 00:20:28.736 [2024-11-19 11:22:23.878605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc160 is same with the state(6) to be set 00:20:28.736 [2024-11-19 11:22:23.878617] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc160 is same with the state(6) to be set
00:20:28.736 [2024-11-19 11:22:23.878628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc160 is same with the state(6) to be set
00:20:28.736 [2024-11-19 11:22:23.878641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc160 is same with the state(6) to be set
00:20:28.736 Write completed with error (sct=0, sc=8)
00:20:28.736 starting I/O failed: -6
00:20:28.736 [2024-11-19 11:22:23.879185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:28.737 Write completed with error (sct=0, sc=8)
00:20:28.737 starting I/O failed: -6
00:20:28.737 [2024-11-19 11:22:23.880440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:28.737 Write completed with error (sct=0, sc=8)
00:20:28.737 starting I/O failed: -6
00:20:28.737 [2024-11-19 11:22:23.881912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:28.738 Write completed with error (sct=0, sc=8)
00:20:28.738 starting I/O failed: -6
00:20:28.738 [2024-11-19 11:22:23.883967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:28.738 NVMe io qpair process completion error
00:20:28.738 Write completed with error (sct=0, sc=8)
00:20:28.738 starting I/O failed: -6
00:20:28.738 [2024-11-19 11:22:23.885248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:28.738 Write completed with error (sct=0, sc=8)
00:20:28.738 starting I/O failed: -6
00:20:28.738 [2024-11-19 11:22:23.886394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:28.739 Write completed with error (sct=0, sc=8)
00:20:28.739 starting I/O failed: -6
00:20:28.739 [2024-11-19 11:22:23.887776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:28.739 Write completed with error (sct=0, sc=8)
00:20:28.739 starting I/O failed: -6
00:20:28.739 [2024-11-19 11:22:23.890688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:28.739 NVMe io qpair process completion error
00:20:28.740 Write completed with error (sct=0, sc=8)
00:20:28.740 starting I/O failed: -6
00:20:28.740 [2024-11-19 11:22:23.892142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:28.740 Write completed with error (sct=0, sc=8)
00:20:28.740 starting I/O failed: -6
00:20:28.740 [2024-11-19 11:22:23.893257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:28.740 Write completed with error (sct=0, sc=8)
00:20:28.740 starting I/O failed: -6
00:20:28.740 [2024-11-19 11:22:23.894697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:28.741 Write completed with error (sct=0, sc=8)
00:20:28.741 starting I/O failed: -6
Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 
00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: 
-6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 [2024-11-19 11:22:23.899262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:28.741 NVMe io qpair process completion error 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting 
I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 [2024-11-19 11:22:23.900850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write 
completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.741 Write completed with error (sct=0, sc=8) 00:20:28.741 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error 
(sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 [2024-11-19 11:22:23.902073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error 
(sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write 
completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 [2024-11-19 11:22:23.903471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 
starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 
00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.742 Write completed with error (sct=0, sc=8) 00:20:28.742 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, 
sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 [2024-11-19 11:22:23.907575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:28.743 NVMe io qpair process completion error 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with 
error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 [2024-11-19 11:22:23.908783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting 
I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 starting I/O failed: -6 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error (sct=0, sc=8) 00:20:28.743 Write completed with error 
(sct=0, sc=8) 00:20:28.743 starting I/O failed: -6
00:20:28.743 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:20:28.743 [2024-11-19 11:22:23.909893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:28.743 [... repeated I/O failure entries elided ...]
00:20:28.744 [2024-11-19 11:22:23.911535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:28.744 [... repeated I/O failure entries elided ...]
00:20:28.744 [2024-11-19 11:22:23.913848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:28.744 NVMe io qpair process completion error
00:20:28.744 [... repeated I/O failure entries elided ...]
00:20:28.745 [2024-11-19 11:22:23.915157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:28.745 [... repeated I/O failure entries elided ...]
00:20:28.745 [2024-11-19 11:22:23.916431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:28.745 [... repeated I/O failure entries elided ...]
00:20:28.745 [2024-11-19 11:22:23.917856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:28.745 [... repeated I/O failure entries elided ...]
00:20:28.746 [2024-11-19 11:22:23.920929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:28.746 NVMe io qpair process completion error
00:20:28.746 [... repeated I/O failure entries elided ...]
00:20:28.746 [2024-11-19 11:22:23.922391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:28.746 [... repeated I/O failure entries elided ...]
00:20:28.746 [2024-11-19 11:22:23.923500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:28.747 [... repeated I/O failure entries elided ...]
00:20:28.747 [2024-11-19 11:22:23.924936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:28.747 [... repeated I/O failure entries elided ...]
00:20:28.747 [2024-11-19 11:22:23.931355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:28.747 NVMe io qpair process completion error
00:20:28.747 [... repeated "Write completed with error (sct=0, sc=8)" entries elided ...]
00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O
failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 [2024-11-19 11:22:23.932653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No 
such device or address) on qpair id 3 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting 
I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 [2024-11-19 11:22:23.933917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 
00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with 
error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 [2024-11-19 11:22:23.935286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.748 starting I/O failed: -6 00:20:28.748 Write completed with error (sct=0, sc=8) 00:20:28.749 
starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 
00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, 
sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 starting I/O failed: -6 00:20:28.749 [2024-11-19 11:22:23.939099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:28.749 NVMe io qpair process completion error 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 
Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 
Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 
Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.749 Write completed with error (sct=0, sc=8) 00:20:28.750 Write completed with error (sct=0, sc=8) 00:20:28.750 Write completed with error (sct=0, sc=8) 00:20:28.750 Write completed with error (sct=0, sc=8) 00:20:28.750 Write completed with error (sct=0, sc=8) 00:20:28.750 Write completed with error (sct=0, sc=8) 00:20:28.750 Write completed with error (sct=0, sc=8) 00:20:28.750 Write completed with error (sct=0, sc=8) 00:20:28.750 Write completed with error (sct=0, sc=8) 00:20:28.750 Write completed with error (sct=0, sc=8) 00:20:28.750 Write completed with error (sct=0, sc=8)
00:20:28.750 Initializing NVMe Controllers
00:20:28.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:20:28.750 Controller IO queue size 128, less than required.
00:20:28.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:20:28.750 Controller IO queue size 128, less than required.
00:20:28.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:20:28.750 Controller IO queue size 128, less than required.
00:20:28.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:20:28.750 Controller IO queue size 128, less than required.
00:20:28.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:20:28.750 Controller IO queue size 128, less than required.
00:20:28.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:20:28.750 Controller IO queue size 128, less than required.
00:20:28.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:20:28.750 Controller IO queue size 128, less than required.
00:20:28.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:28.750 Controller IO queue size 128, less than required.
00:20:28.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:20:28.750 Controller IO queue size 128, less than required.
00:20:28.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:20:28.750 Controller IO queue size 128, less than required.
00:20:28.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:28.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:20:28.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:20:28.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:20:28.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:20:28.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:20:28.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:20:28.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:20:28.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:28.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:20:28.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:20:28.750 Initialization complete. Launching workers.
00:20:28.750 ========================================================
00:20:28.750                                                                               Latency(us)
00:20:28.750 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:20:28.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0 :    1711.48      73.54   74814.36    1262.61  133486.73
00:20:28.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0 :    1719.79      73.90   74486.94    1127.92  129865.04
00:20:28.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0 :    1691.26      72.67   75790.68    1014.72  139219.80
00:20:28.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:    1708.93      73.43   75080.56     891.06  144541.86
00:20:28.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0 :    1715.95      73.73   74837.49    1126.56  129530.12
00:20:28.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0 :    1705.73      73.29   75322.02     930.00  129519.59
00:20:28.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0 :    1708.93      73.43   75235.09    1131.11  156445.69
00:20:28.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0 :    1734.90      74.55   73126.84    1128.22  131174.52
00:20:28.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0 :    1643.99      70.64   78233.13    1315.67  131462.67
00:20:28.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0 :    1669.96      71.76   76805.19     577.73  136132.44
00:20:28.750 ========================================================
00:20:28.750 Total                                                                     :   17010.93     730.94   75354.31     577.73  156445.69
00:20:28.750
00:20:28.750 [2024-11-19 11:22:23.946151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ee5f0 is same with the state(6) to be set
00:20:28.750 [2024-11-19 11:22:23.946265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ee2c0 is same with the state(6) to be set
00:20:28.750 [2024-11-19 11:22:23.946345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ed9e0 is same with the state(6) to be set
00:20:28.750 [2024-11-19 11:22:23.946457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ef900 is same with the state(6) to be set
00:20:28.750 [2024-11-19 11:22:23.946535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13efae0 is same with the state(6) to be set
00:20:28.750 [2024-11-19 11:22:23.946612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ed6b0 is same with the state(6) to be set
00:20:28.750 [2024-11-19 11:22:23.946702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13edd10 is same with the state(6) to be set
00:20:28.750 [2024-11-19 11:22:23.946782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ef720 is same with the state(6) to be set
00:20:28.750 [2024-11-19 11:22:23.946870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eec50 is same with the state(6) to be set
00:20:28.750 [2024-11-19 11:22:23.946951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ee920 is same with the state(6) to be set
00:20:28.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:20:29.010 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2656216
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2656216
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2656216
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:29.948 11:22:25
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:29.948 rmmod nvme_tcp 00:20:29.948 rmmod nvme_fabrics 00:20:29.948 rmmod nvme_keyring 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2656048 ']' 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2656048 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2656048 ']' 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2656048 00:20:29.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2656048) - No such process 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2656048 is not 
found' 00:20:29.948 Process with pid 2656048 is not found 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.948 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:32.486 00:20:32.486 real 0m9.800s 00:20:32.486 user 0m24.254s 00:20:32.486 sys 0m6.179s 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:32.486 ************************************ 00:20:32.486 END TEST nvmf_shutdown_tc4 00:20:32.486 ************************************ 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:20:32.486 00:20:32.486 real 0m37.825s 00:20:32.486 user 1m41.448s 00:20:32.486 sys 0m13.044s 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:32.486 ************************************ 00:20:32.486 END TEST nvmf_shutdown 00:20:32.486 ************************************ 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:32.486 ************************************ 00:20:32.486 START TEST nvmf_nsid 00:20:32.486 ************************************ 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:32.486 * Looking for test storage... 
00:20:32.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:32.486 
11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:32.486 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:32.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.487 --rc genhtml_branch_coverage=1 00:20:32.487 --rc genhtml_function_coverage=1 00:20:32.487 --rc genhtml_legend=1 00:20:32.487 --rc geninfo_all_blocks=1 00:20:32.487 --rc 
geninfo_unexecuted_blocks=1 00:20:32.487 00:20:32.487 ' 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:32.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.487 --rc genhtml_branch_coverage=1 00:20:32.487 --rc genhtml_function_coverage=1 00:20:32.487 --rc genhtml_legend=1 00:20:32.487 --rc geninfo_all_blocks=1 00:20:32.487 --rc geninfo_unexecuted_blocks=1 00:20:32.487 00:20:32.487 ' 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:32.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.487 --rc genhtml_branch_coverage=1 00:20:32.487 --rc genhtml_function_coverage=1 00:20:32.487 --rc genhtml_legend=1 00:20:32.487 --rc geninfo_all_blocks=1 00:20:32.487 --rc geninfo_unexecuted_blocks=1 00:20:32.487 00:20:32.487 ' 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:32.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.487 --rc genhtml_branch_coverage=1 00:20:32.487 --rc genhtml_function_coverage=1 00:20:32.487 --rc genhtml_legend=1 00:20:32.487 --rc geninfo_all_blocks=1 00:20:32.487 --rc geninfo_unexecuted_blocks=1 00:20:32.487 00:20:32.487 ' 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.487 11:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:32.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:32.487 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:35.047 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:35.047 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.047 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:35.048 Found net devices under 0000:82:00.0: cvl_0_0 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:35.048 Found net devices under 0000:82:00.1: cvl_0_1 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:35.048 11:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:35.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:35.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms
00:20:35.048
00:20:35.048 --- 10.0.0.2 ping statistics ---
00:20:35.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:35.048 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:35.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:35.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms
00:20:35.048
00:20:35.048 --- 10.0.0.1 ping statistics ---
00:20:35.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:35.048 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1
00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:35.048 11:22:30
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2659308 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2659308 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2659308 ']' 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.048 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:35.307 [2024-11-19 11:22:30.581776] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:20:35.307 [2024-11-19 11:22:30.581863] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.307 [2024-11-19 11:22:30.666963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.307 [2024-11-19 11:22:30.726766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.307 [2024-11-19 11:22:30.726832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.307 [2024-11-19 11:22:30.726862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.307 [2024-11-19 11:22:30.726874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.307 [2024-11-19 11:22:30.726885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:35.307 [2024-11-19 11:22:30.727581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.565 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.565 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:35.565 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.565 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.565 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2659386 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.566 
11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=22d12dbe-9870-4d37-8e48-a2278e5a4539 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=c0e189f6-2eda-46a7-9736-dd5eaf2fdd30 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=5eb5d1fc-80ae-4fda-9cae-3ab9a83438a5 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:35.566 null0 00:20:35.566 null1 00:20:35.566 null2 00:20:35.566 [2024-11-19 11:22:30.909893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.566 [2024-11-19 11:22:30.923299] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:20:35.566 [2024-11-19 11:22:30.923390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659386 ] 00:20:35.566 [2024-11-19 11:22:30.934100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2659386 /var/tmp/tgt2.sock 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2659386 ']' 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:35.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.566 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:35.566 [2024-11-19 11:22:30.999869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.566 [2024-11-19 11:22:31.057163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.826 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.826 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:35.826 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:36.421 [2024-11-19 11:22:31.698000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.421 [2024-11-19 11:22:31.714188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:36.421 nvme0n1 nvme0n2 00:20:36.421 nvme1n1 00:20:36.421 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:36.421 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:36.421 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:36.988 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 22d12dbe-9870-4d37-8e48-a2278e5a4539 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:37.923 11:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=22d12dbe98704d378e48a2278e5a4539 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 22D12DBE98704D378E48A2278E5A4539 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 22D12DBE98704D378E48A2278E5A4539 == \2\2\D\1\2\D\B\E\9\8\7\0\4\D\3\7\8\E\4\8\A\2\2\7\8\E\5\A\4\5\3\9 ]] 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid c0e189f6-2eda-46a7-9736-dd5eaf2fdd30 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:37.923 
11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:37.923 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:38.181 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c0e189f62eda46a79736dd5eaf2fdd30 00:20:38.181 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C0E189F62EDA46A79736DD5EAF2FDD30 00:20:38.181 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ C0E189F62EDA46A79736DD5EAF2FDD30 == \C\0\E\1\8\9\F\6\2\E\D\A\4\6\A\7\9\7\3\6\D\D\5\E\A\F\2\F\D\D\3\0 ]] 00:20:38.181 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:38.181 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:38.181 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:38.181 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:38.181 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:38.181 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:38.181 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:38.182 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 5eb5d1fc-80ae-4fda-9cae-3ab9a83438a5 00:20:38.182 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:38.182 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:38.182 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:38.182 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:20:38.182 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:38.182 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5eb5d1fc80ae4fda9cae3ab9a83438a5 00:20:38.182 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5EB5D1FC80AE4FDA9CAE3AB9A83438A5 00:20:38.182 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 5EB5D1FC80AE4FDA9CAE3AB9A83438A5 == \5\E\B\5\D\1\F\C\8\0\A\E\4\F\D\A\9\C\A\E\3\A\B\9\A\8\3\4\3\8\A\5 ]] 00:20:38.182 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:38.440 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:38.440 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:38.440 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2659386 00:20:38.440 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2659386 ']' 00:20:38.440 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2659386 00:20:38.440 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:38.440 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.440 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2659386 00:20:38.440 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:38.440 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:38.440 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2659386' 00:20:38.440 killing process with pid 2659386 00:20:38.440 11:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2659386 00:20:38.440 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2659386 00:20:38.698 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:38.698 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:38.698 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:38.698 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.698 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:20:38.698 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.698 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.698 rmmod nvme_tcp 00:20:38.698 rmmod nvme_fabrics 00:20:38.698 rmmod nvme_keyring 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2659308 ']' 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2659308 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2659308 ']' 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2659308 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.956 11:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2659308 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2659308' 00:20:38.956 killing process with pid 2659308 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2659308 00:20:38.956 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2659308 00:20:39.216 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.216 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.216 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.216 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:39.216 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:39.216 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.216 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.216 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.216 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.216 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.216 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.216 11:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:41.124 00:20:41.124 real 0m8.949s 00:20:41.124 user 0m8.485s 00:20:41.124 sys 0m3.085s 00:20:41.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.124 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:41.124 ************************************ 00:20:41.124 END TEST nvmf_nsid 00:20:41.124 ************************************ 00:20:41.124 11:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:41.124 00:20:41.124 real 11m56.861s 00:20:41.124 user 27m54.623s 00:20:41.124 sys 3m0.720s 00:20:41.124 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.124 11:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:41.124 ************************************ 00:20:41.124 END TEST nvmf_target_extra 00:20:41.124 ************************************ 00:20:41.124 11:22:36 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:41.124 11:22:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.124 11:22:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.124 11:22:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:41.124 ************************************ 00:20:41.124 START TEST nvmf_host 00:20:41.124 ************************************ 00:20:41.124 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:41.383 * Looking for test storage... 
00:20:41.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:41.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.383 --rc genhtml_branch_coverage=1 00:20:41.383 --rc genhtml_function_coverage=1 00:20:41.383 --rc genhtml_legend=1 00:20:41.383 --rc geninfo_all_blocks=1 00:20:41.383 --rc geninfo_unexecuted_blocks=1 00:20:41.383 00:20:41.383 ' 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:41.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.383 --rc genhtml_branch_coverage=1 00:20:41.383 --rc genhtml_function_coverage=1 00:20:41.383 --rc genhtml_legend=1 00:20:41.383 --rc 
geninfo_all_blocks=1 00:20:41.383 --rc geninfo_unexecuted_blocks=1 00:20:41.383 00:20:41.383 ' 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:41.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.383 --rc genhtml_branch_coverage=1 00:20:41.383 --rc genhtml_function_coverage=1 00:20:41.383 --rc genhtml_legend=1 00:20:41.383 --rc geninfo_all_blocks=1 00:20:41.383 --rc geninfo_unexecuted_blocks=1 00:20:41.383 00:20:41.383 ' 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:41.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.383 --rc genhtml_branch_coverage=1 00:20:41.383 --rc genhtml_function_coverage=1 00:20:41.383 --rc genhtml_legend=1 00:20:41.383 --rc geninfo_all_blocks=1 00:20:41.383 --rc geninfo_unexecuted_blocks=1 00:20:41.383 00:20:41.383 ' 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.383 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:41.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.384 ************************************ 00:20:41.384 START TEST nvmf_multicontroller 00:20:41.384 ************************************ 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:41.384 * Looking for test storage... 
00:20:41.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:20:41.384 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:41.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.644 --rc genhtml_branch_coverage=1 00:20:41.644 --rc genhtml_function_coverage=1 
00:20:41.644 --rc genhtml_legend=1 00:20:41.644 --rc geninfo_all_blocks=1 00:20:41.644 --rc geninfo_unexecuted_blocks=1 00:20:41.644 00:20:41.644 ' 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:41.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.644 --rc genhtml_branch_coverage=1 00:20:41.644 --rc genhtml_function_coverage=1 00:20:41.644 --rc genhtml_legend=1 00:20:41.644 --rc geninfo_all_blocks=1 00:20:41.644 --rc geninfo_unexecuted_blocks=1 00:20:41.644 00:20:41.644 ' 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:41.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.644 --rc genhtml_branch_coverage=1 00:20:41.644 --rc genhtml_function_coverage=1 00:20:41.644 --rc genhtml_legend=1 00:20:41.644 --rc geninfo_all_blocks=1 00:20:41.644 --rc geninfo_unexecuted_blocks=1 00:20:41.644 00:20:41.644 ' 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:41.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.644 --rc genhtml_branch_coverage=1 00:20:41.644 --rc genhtml_function_coverage=1 00:20:41.644 --rc genhtml_legend=1 00:20:41.644 --rc geninfo_all_blocks=1 00:20:41.644 --rc geninfo_unexecuted_blocks=1 00:20:41.644 00:20:41.644 ' 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.644 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.645 11:22:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:41.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:20:41.645 11:22:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:44.177 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:44.177 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.177 11:22:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:44.177 Found net devices under 0000:82:00.0: cvl_0_0 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:44.177 Found net devices under 0000:82:00.1: cvl_0_1 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.177 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:44.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:20:44.178 00:20:44.178 --- 10.0.0.2 ping statistics --- 00:20:44.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.178 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:20:44.178 00:20:44.178 --- 10.0.0.1 ping statistics --- 00:20:44.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.178 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2662119 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2662119 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2662119 ']' 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.178 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.178 [2024-11-19 11:22:39.626848] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:20:44.178 [2024-11-19 11:22:39.626936] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.436 [2024-11-19 11:22:39.719463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:44.436 [2024-11-19 11:22:39.779407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.436 [2024-11-19 11:22:39.779472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:44.436 [2024-11-19 11:22:39.779500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.436 [2024-11-19 11:22:39.779516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.436 [2024-11-19 11:22:39.779526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.436 [2024-11-19 11:22:39.781146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.436 [2024-11-19 11:22:39.781206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.436 [2024-11-19 11:22:39.781210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.436 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.436 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:44.436 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.437 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.437 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.437 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.437 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:44.437 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.437 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.437 [2024-11-19 11:22:39.932261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.695 Malloc0 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.695 11:22:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.695 [2024-11-19 
11:22:40.000655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.695 [2024-11-19 11:22:40.008522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.695 Malloc1 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2662260 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2662260 /var/tmp/bdevperf.sock 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2662260 ']' 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.695 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.954 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.954 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:44.954 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:44.954 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.954 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.212 NVMe0n1 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.212 1 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:45.212 11:22:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.212 request: 00:20:45.212 { 00:20:45.212 "name": "NVMe0", 00:20:45.212 "trtype": "tcp", 00:20:45.212 "traddr": "10.0.0.2", 00:20:45.212 "adrfam": "ipv4", 00:20:45.212 "trsvcid": "4420", 00:20:45.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.212 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:45.212 "hostaddr": "10.0.0.1", 00:20:45.212 "prchk_reftag": false, 00:20:45.212 "prchk_guard": false, 00:20:45.212 "hdgst": false, 00:20:45.212 "ddgst": false, 00:20:45.212 "allow_unrecognized_csi": false, 00:20:45.212 "method": "bdev_nvme_attach_controller", 00:20:45.212 "req_id": 1 00:20:45.212 } 00:20:45.212 Got JSON-RPC error response 00:20:45.212 response: 00:20:45.212 { 00:20:45.212 "code": -114, 00:20:45.212 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:45.212 } 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:45.212 11:22:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.212 request: 00:20:45.212 { 00:20:45.212 "name": "NVMe0", 00:20:45.212 "trtype": "tcp", 00:20:45.212 "traddr": "10.0.0.2", 00:20:45.212 "adrfam": "ipv4", 00:20:45.212 "trsvcid": "4420", 00:20:45.212 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:45.212 "hostaddr": "10.0.0.1", 00:20:45.212 "prchk_reftag": false, 00:20:45.212 "prchk_guard": false, 00:20:45.212 "hdgst": false, 00:20:45.212 "ddgst": false, 00:20:45.212 "allow_unrecognized_csi": false, 00:20:45.212 "method": "bdev_nvme_attach_controller", 00:20:45.212 "req_id": 1 00:20:45.212 } 00:20:45.212 Got JSON-RPC error response 00:20:45.212 response: 00:20:45.212 { 00:20:45.212 "code": -114, 00:20:45.212 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:45.212 } 00:20:45.212 11:22:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.212 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.212 request: 00:20:45.212 { 00:20:45.212 "name": "NVMe0", 00:20:45.212 "trtype": "tcp", 00:20:45.212 "traddr": "10.0.0.2", 00:20:45.212 "adrfam": "ipv4", 00:20:45.212 "trsvcid": "4420", 00:20:45.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.212 "hostaddr": "10.0.0.1", 00:20:45.212 "prchk_reftag": false, 00:20:45.212 "prchk_guard": false, 00:20:45.213 "hdgst": false, 00:20:45.213 "ddgst": false, 00:20:45.213 "multipath": "disable", 00:20:45.213 "allow_unrecognized_csi": false, 00:20:45.213 "method": "bdev_nvme_attach_controller", 00:20:45.213 "req_id": 1 00:20:45.213 } 00:20:45.213 Got JSON-RPC error response 00:20:45.213 response: 00:20:45.213 { 00:20:45.213 "code": -114, 00:20:45.213 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:20:45.213 } 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.213 request: 00:20:45.213 { 00:20:45.213 "name": "NVMe0", 00:20:45.213 "trtype": "tcp", 00:20:45.213 "traddr": "10.0.0.2", 00:20:45.213 "adrfam": "ipv4", 00:20:45.213 "trsvcid": "4420", 00:20:45.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.213 "hostaddr": "10.0.0.1", 00:20:45.213 "prchk_reftag": false, 00:20:45.213 "prchk_guard": false, 00:20:45.213 "hdgst": false, 00:20:45.213 "ddgst": false, 00:20:45.213 "multipath": "failover", 00:20:45.213 "allow_unrecognized_csi": false, 00:20:45.213 "method": "bdev_nvme_attach_controller", 00:20:45.213 "req_id": 1 00:20:45.213 } 00:20:45.213 Got JSON-RPC error response 00:20:45.213 response: 00:20:45.213 { 00:20:45.213 "code": -114, 00:20:45.213 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:45.213 } 00:20:45.213 11:22:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.213 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.471 NVMe0n1 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.471 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:45.471 11:22:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:46.844 { 00:20:46.844 "results": [ 00:20:46.844 { 00:20:46.844 "job": "NVMe0n1", 00:20:46.844 "core_mask": "0x1", 00:20:46.844 "workload": "write", 00:20:46.844 "status": "finished", 00:20:46.844 "queue_depth": 128, 00:20:46.844 "io_size": 4096, 00:20:46.844 "runtime": 1.00699, 00:20:46.844 "iops": 19054.806899770603, 00:20:46.844 "mibps": 74.43283945222892, 00:20:46.844 "io_failed": 0, 00:20:46.844 "io_timeout": 0, 00:20:46.844 "avg_latency_us": 6698.902863672512, 00:20:46.844 "min_latency_us": 3592.343703703704, 00:20:46.844 "max_latency_us": 12233.386666666667 00:20:46.844 } 00:20:46.844 ], 00:20:46.844 "core_count": 1 00:20:46.844 } 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2662260 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2662260 ']' 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2662260 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2662260 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2662260' 00:20:46.844 killing process with pid 2662260 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2662260 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2662260 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:46.844 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:20:46.845 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:46.845 [2024-11-19 11:22:40.119184] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:20:46.845 [2024-11-19 11:22:40.119277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662260 ] 00:20:46.845 [2024-11-19 11:22:40.202163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.845 [2024-11-19 11:22:40.260826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.845 [2024-11-19 11:22:40.920479] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 7bbb7ac0-a362-4aff-ace5-ba964e77f368 already exists 00:20:46.845 [2024-11-19 11:22:40.920519] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:7bbb7ac0-a362-4aff-ace5-ba964e77f368 alias for bdev NVMe1n1 00:20:46.845 [2024-11-19 11:22:40.920534] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:46.845 Running I/O for 1 seconds... 00:20:46.845 18997.00 IOPS, 74.21 MiB/s 00:20:46.845 Latency(us) 00:20:46.845 [2024-11-19T10:22:42.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.845 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:46.845 NVMe0n1 : 1.01 19054.81 74.43 0.00 0.00 6698.90 3592.34 12233.39 00:20:46.845 [2024-11-19T10:22:42.342Z] =================================================================================================================== 00:20:46.845 [2024-11-19T10:22:42.342Z] Total : 19054.81 74.43 0.00 0.00 6698.90 3592.34 12233.39 00:20:46.845 Received shutdown signal, test time was about 1.000000 seconds 00:20:46.845 00:20:46.845 Latency(us) 00:20:46.845 [2024-11-19T10:22:42.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.845 [2024-11-19T10:22:42.342Z] =================================================================================================================== 00:20:46.845 [2024-11-19T10:22:42.342Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:20:46.845 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:46.845 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:47.104 rmmod nvme_tcp 00:20:47.104 rmmod nvme_fabrics 00:20:47.104 rmmod nvme_keyring 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2662119 ']' 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2662119 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2662119 ']' 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2662119 
00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2662119 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2662119' 00:20:47.104 killing process with pid 2662119 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2662119 00:20:47.104 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2662119 00:20:47.364 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:47.364 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:47.364 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:47.364 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:20:47.364 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:20:47.364 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:47.364 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:20:47.364 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:47.364 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:20:47.364 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.364 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.364 11:22:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.270 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:49.270 00:20:49.270 real 0m7.981s 00:20:49.270 user 0m11.954s 00:20:49.270 sys 0m2.689s 00:20:49.270 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.270 11:22:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.270 ************************************ 00:20:49.270 END TEST nvmf_multicontroller 00:20:49.270 ************************************ 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.531 ************************************ 00:20:49.531 START TEST nvmf_aer 00:20:49.531 ************************************ 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:49.531 * Looking for test storage... 
00:20:49.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:49.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.531 --rc genhtml_branch_coverage=1 00:20:49.531 --rc genhtml_function_coverage=1 00:20:49.531 --rc genhtml_legend=1 00:20:49.531 --rc geninfo_all_blocks=1 00:20:49.531 --rc geninfo_unexecuted_blocks=1 00:20:49.531 00:20:49.531 ' 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:49.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.531 --rc 
genhtml_branch_coverage=1 00:20:49.531 --rc genhtml_function_coverage=1 00:20:49.531 --rc genhtml_legend=1 00:20:49.531 --rc geninfo_all_blocks=1 00:20:49.531 --rc geninfo_unexecuted_blocks=1 00:20:49.531 00:20:49.531 ' 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:49.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.531 --rc genhtml_branch_coverage=1 00:20:49.531 --rc genhtml_function_coverage=1 00:20:49.531 --rc genhtml_legend=1 00:20:49.531 --rc geninfo_all_blocks=1 00:20:49.531 --rc geninfo_unexecuted_blocks=1 00:20:49.531 00:20:49.531 ' 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:49.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.531 --rc genhtml_branch_coverage=1 00:20:49.531 --rc genhtml_function_coverage=1 00:20:49.531 --rc genhtml_legend=1 00:20:49.531 --rc geninfo_all_blocks=1 00:20:49.531 --rc geninfo_unexecuted_blocks=1 00:20:49.531 00:20:49.531 ' 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.531 11:22:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.531 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:49.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:20:49.532 11:22:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:52.820 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:52.820 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.820 11:22:47 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:52.820 Found net devices under 0000:82:00.0: cvl_0_0 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.820 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:52.821 Found net devices under 0000:82:00.1: cvl_0_1 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:52.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:52.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:20:52.821 00:20:52.821 --- 10.0.0.2 ping statistics --- 00:20:52.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.821 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:20:52.821 00:20:52.821 --- 10.0.0.1 ping statistics --- 00:20:52.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.821 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2664892 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2664892 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2664892 ']' 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.821 11:22:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.821 [2024-11-19 11:22:47.862331] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:20:52.821 [2024-11-19 11:22:47.862428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.821 [2024-11-19 11:22:47.942926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:52.821 [2024-11-19 11:22:48.003785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:52.821 [2024-11-19 11:22:48.003851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.821 [2024-11-19 11:22:48.003880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.821 [2024-11-19 11:22:48.003891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.821 [2024-11-19 11:22:48.003905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.821 [2024-11-19 11:22:48.005525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.821 [2024-11-19 11:22:48.005581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.821 [2024-11-19 11:22:48.005647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.821 [2024-11-19 11:22:48.005650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.821 [2024-11-19 11:22:48.150437] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.821 Malloc0 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.821 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.822 [2024-11-19 11:22:48.222578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:52.822 [ 00:20:52.822 { 00:20:52.822 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:52.822 "subtype": "Discovery", 00:20:52.822 "listen_addresses": [], 00:20:52.822 "allow_any_host": true, 00:20:52.822 "hosts": [] 00:20:52.822 }, 00:20:52.822 { 00:20:52.822 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.822 "subtype": "NVMe", 00:20:52.822 "listen_addresses": [ 00:20:52.822 { 00:20:52.822 "trtype": "TCP", 00:20:52.822 "adrfam": "IPv4", 00:20:52.822 "traddr": "10.0.0.2", 00:20:52.822 "trsvcid": "4420" 00:20:52.822 } 00:20:52.822 ], 00:20:52.822 "allow_any_host": true, 00:20:52.822 "hosts": [], 00:20:52.822 "serial_number": "SPDK00000000000001", 00:20:52.822 "model_number": "SPDK bdev Controller", 00:20:52.822 "max_namespaces": 2, 00:20:52.822 "min_cntlid": 1, 00:20:52.822 "max_cntlid": 65519, 00:20:52.822 "namespaces": [ 00:20:52.822 { 00:20:52.822 "nsid": 1, 00:20:52.822 "bdev_name": "Malloc0", 00:20:52.822 "name": "Malloc0", 00:20:52.822 "nguid": "83A9E85020EE41B1809373A7255C7E0C", 00:20:52.822 "uuid": "83a9e850-20ee-41b1-8093-73a7255c7e0c" 00:20:52.822 } 00:20:52.822 ] 00:20:52.822 } 00:20:52.822 ] 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2664927 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:20:52.822 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.080 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.338 Malloc1 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.339 [ 00:20:53.339 { 00:20:53.339 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:53.339 "subtype": "Discovery", 00:20:53.339 "listen_addresses": [], 00:20:53.339 "allow_any_host": true, 00:20:53.339 "hosts": [] 00:20:53.339 }, 00:20:53.339 { 00:20:53.339 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.339 "subtype": "NVMe", 00:20:53.339 "listen_addresses": [ 00:20:53.339 { 00:20:53.339 "trtype": "TCP", 00:20:53.339 "adrfam": "IPv4", 00:20:53.339 "traddr": "10.0.0.2", 00:20:53.339 "trsvcid": "4420" 00:20:53.339 } 00:20:53.339 ], 00:20:53.339 "allow_any_host": true, 00:20:53.339 "hosts": [], 00:20:53.339 "serial_number": "SPDK00000000000001", 00:20:53.339 "model_number": 
"SPDK bdev Controller", 00:20:53.339 "max_namespaces": 2, 00:20:53.339 "min_cntlid": 1, 00:20:53.339 "max_cntlid": 65519, 00:20:53.339 "namespaces": [ 00:20:53.339 { 00:20:53.339 "nsid": 1, 00:20:53.339 "bdev_name": "Malloc0", 00:20:53.339 "name": "Malloc0", 00:20:53.339 "nguid": "83A9E85020EE41B1809373A7255C7E0C", 00:20:53.339 "uuid": "83a9e850-20ee-41b1-8093-73a7255c7e0c" 00:20:53.339 }, 00:20:53.339 { 00:20:53.339 "nsid": 2, 00:20:53.339 "bdev_name": "Malloc1", 00:20:53.339 "name": "Malloc1", 00:20:53.339 "nguid": "AC62CA541390418B91198EDBCFF1EB80", 00:20:53.339 "uuid": "ac62ca54-1390-418b-9119-8edbcff1eb80" 00:20:53.339 } 00:20:53.339 ] 00:20:53.339 } 00:20:53.339 ] 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2664927 00:20:53.339 Asynchronous Event Request test 00:20:53.339 Attaching to 10.0.0.2 00:20:53.339 Attached to 10.0.0.2 00:20:53.339 Registering asynchronous event callbacks... 00:20:53.339 Starting namespace attribute notice tests for all controllers... 00:20:53.339 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:53.339 aer_cb - Changed Namespace 00:20:53.339 Cleaning up... 
00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:53.339 rmmod nvme_tcp 
00:20:53.339 rmmod nvme_fabrics 00:20:53.339 rmmod nvme_keyring 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2664892 ']' 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2664892 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2664892 ']' 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2664892 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2664892 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2664892' 00:20:53.339 killing process with pid 2664892 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2664892 00:20:53.339 11:22:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2664892 00:20:53.599 11:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:53.599 11:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:53.599 11:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:53.599 11:22:49 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:20:53.599 11:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:20:53.599 11:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:53.599 11:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:20:53.599 11:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:53.599 11:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:53.599 11:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.599 11:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.599 11:22:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:56.141 00:20:56.141 real 0m6.270s 00:20:56.141 user 0m5.007s 00:20:56.141 sys 0m2.482s 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.141 ************************************ 00:20:56.141 END TEST nvmf_aer 00:20:56.141 ************************************ 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.141 ************************************ 00:20:56.141 START TEST nvmf_async_init 
00:20:56.141 ************************************ 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:56.141 * Looking for test storage... 00:20:56.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:56.141 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:56.141 --rc genhtml_branch_coverage=1 00:20:56.141 --rc genhtml_function_coverage=1 00:20:56.141 --rc genhtml_legend=1 00:20:56.141 --rc geninfo_all_blocks=1 00:20:56.141 --rc geninfo_unexecuted_blocks=1 00:20:56.141 00:20:56.141 ' 00:20:56.141 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:56.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.142 --rc genhtml_branch_coverage=1 00:20:56.142 --rc genhtml_function_coverage=1 00:20:56.142 --rc genhtml_legend=1 00:20:56.142 --rc geninfo_all_blocks=1 00:20:56.142 --rc geninfo_unexecuted_blocks=1 00:20:56.142 00:20:56.142 ' 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:56.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.142 --rc genhtml_branch_coverage=1 00:20:56.142 --rc genhtml_function_coverage=1 00:20:56.142 --rc genhtml_legend=1 00:20:56.142 --rc geninfo_all_blocks=1 00:20:56.142 --rc geninfo_unexecuted_blocks=1 00:20:56.142 00:20:56.142 ' 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:56.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.142 --rc genhtml_branch_coverage=1 00:20:56.142 --rc genhtml_function_coverage=1 00:20:56.142 --rc genhtml_legend=1 00:20:56.142 --rc geninfo_all_blocks=1 00:20:56.142 --rc geninfo_unexecuted_blocks=1 00:20:56.142 00:20:56.142 ' 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.142 11:22:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.142 
11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=910d83a0fc1043479d299d756ea40dd5 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:20:56.142 11:22:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.681 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.681 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:20:58.681 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:58.681 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:58.681 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:58.681 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:58.681 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:58.681 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:20:58.681 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:58.681 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:20:58.681 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:20:58.681 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:58.682 11:22:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:58.682 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:58.682 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init 
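
The device-scan trace above groups NICs by PCI vendor:device ID (Intel e810 parts 0x1592/0x159b, x722 0x37d2, and a list of Mellanox IDs), which is how it recognizes the two 0x8086:0x159b ports. A minimal standalone sketch of that classification, with the ID tables transcribed from the trace (the helper name `classify_nic` is ours, not SPDK's):

```shell
#!/bin/sh
# Classify a "vendor:device" PCI ID string the way the trace's nvmf/common.sh
# groups NICs into e810 / x722 / mlx buckets. Tables copied from the log above.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|\
        0x15b3:0x101b|0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|\
        0x15b3:0x1013)               echo mlx ;;
        *)                           echo unknown ;;
    esac
}
```

For example, `classify_nic 0x8086:0x159b` prints `e810`, matching the "Found 0000:82:00.0 (0x8086 - 0x159b)" lines in the trace.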
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:58.682 Found net devices under 0000:82:00.0: cvl_0_0 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:58.682 Found net devices under 0000:82:00.1: cvl_0_1 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:58.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:58.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:20:58.682 00:20:58.682 --- 10.0.0.2 ping statistics --- 00:20:58.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.682 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:58.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:20:58.682 00:20:58.682 --- 10.0.0.1 ping statistics --- 00:20:58.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.682 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:58.682 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:58.683 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:58.683 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.683 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.683 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2667277 00:20:58.683 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:58.683 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2667277 00:20:58.683 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2667277 ']' 00:20:58.683 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.683 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.683 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.683 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.683 11:22:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.683 [2024-11-19 11:22:53.919087] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:20:58.683 [2024-11-19 11:22:53.919177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.683 [2024-11-19 11:22:53.999814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.683 [2024-11-19 11:22:54.052607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.683 [2024-11-19 11:22:54.052684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.683 [2024-11-19 11:22:54.052697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.683 [2024-11-19 11:22:54.052723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.683 [2024-11-19 11:22:54.052733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:58.683 [2024-11-19 11:22:54.053337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.941 [2024-11-19 11:22:54.218599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.941 null0 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 910d83a0fc1043479d299d756ea40dd5 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:58.941 [2024-11-19 11:22:54.258855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.941 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.200 nvme0n1 00:20:59.200 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.200 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:59.200 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.200 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.200 [ 00:20:59.200 { 00:20:59.200 "name": "nvme0n1", 00:20:59.200 "aliases": [ 00:20:59.200 "910d83a0-fc10-4347-9d29-9d756ea40dd5" 00:20:59.200 ], 00:20:59.200 "product_name": "NVMe disk", 00:20:59.200 "block_size": 512, 00:20:59.200 "num_blocks": 2097152, 00:20:59.200 "uuid": "910d83a0-fc10-4347-9d29-9d756ea40dd5", 00:20:59.200 "numa_id": 1, 00:20:59.200 "assigned_rate_limits": { 00:20:59.200 "rw_ios_per_sec": 0, 00:20:59.200 "rw_mbytes_per_sec": 0, 00:20:59.200 "r_mbytes_per_sec": 0, 00:20:59.200 "w_mbytes_per_sec": 0 00:20:59.200 }, 00:20:59.200 "claimed": false, 00:20:59.200 "zoned": false, 00:20:59.200 "supported_io_types": { 00:20:59.200 "read": true, 00:20:59.200 "write": true, 00:20:59.200 "unmap": false, 00:20:59.200 "flush": true, 00:20:59.200 "reset": true, 00:20:59.200 "nvme_admin": true, 00:20:59.200 "nvme_io": true, 00:20:59.200 "nvme_io_md": false, 00:20:59.200 "write_zeroes": true, 00:20:59.200 "zcopy": false, 00:20:59.200 "get_zone_info": false, 00:20:59.200 "zone_management": false, 00:20:59.200 "zone_append": false, 00:20:59.200 "compare": true, 00:20:59.200 "compare_and_write": true, 00:20:59.200 "abort": true, 00:20:59.200 "seek_hole": false, 00:20:59.200 "seek_data": false, 00:20:59.200 "copy": true, 00:20:59.200 
"nvme_iov_md": false 00:20:59.200 }, 00:20:59.200 "memory_domains": [ 00:20:59.200 { 00:20:59.200 "dma_device_id": "system", 00:20:59.200 "dma_device_type": 1 00:20:59.200 } 00:20:59.200 ], 00:20:59.200 "driver_specific": { 00:20:59.200 "nvme": [ 00:20:59.200 { 00:20:59.200 "trid": { 00:20:59.200 "trtype": "TCP", 00:20:59.200 "adrfam": "IPv4", 00:20:59.200 "traddr": "10.0.0.2", 00:20:59.200 "trsvcid": "4420", 00:20:59.200 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:59.200 }, 00:20:59.200 "ctrlr_data": { 00:20:59.200 "cntlid": 1, 00:20:59.200 "vendor_id": "0x8086", 00:20:59.200 "model_number": "SPDK bdev Controller", 00:20:59.200 "serial_number": "00000000000000000000", 00:20:59.200 "firmware_revision": "25.01", 00:20:59.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:59.200 "oacs": { 00:20:59.200 "security": 0, 00:20:59.200 "format": 0, 00:20:59.200 "firmware": 0, 00:20:59.200 "ns_manage": 0 00:20:59.200 }, 00:20:59.200 "multi_ctrlr": true, 00:20:59.201 "ana_reporting": false 00:20:59.201 }, 00:20:59.201 "vs": { 00:20:59.201 "nvme_version": "1.3" 00:20:59.201 }, 00:20:59.201 "ns_data": { 00:20:59.201 "id": 1, 00:20:59.201 "can_share": true 00:20:59.201 } 00:20:59.201 } 00:20:59.201 ], 00:20:59.201 "mp_policy": "active_passive" 00:20:59.201 } 00:20:59.201 } 00:20:59.201 ] 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.201 [2024-11-19 11:22:54.508081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:59.201 [2024-11-19 11:22:54.508170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x107cda0 (9): Bad file descriptor 00:20:59.201 [2024-11-19 11:22:54.640497] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.201 [ 00:20:59.201 { 00:20:59.201 "name": "nvme0n1", 00:20:59.201 "aliases": [ 00:20:59.201 "910d83a0-fc10-4347-9d29-9d756ea40dd5" 00:20:59.201 ], 00:20:59.201 "product_name": "NVMe disk", 00:20:59.201 "block_size": 512, 00:20:59.201 "num_blocks": 2097152, 00:20:59.201 "uuid": "910d83a0-fc10-4347-9d29-9d756ea40dd5", 00:20:59.201 "numa_id": 1, 00:20:59.201 "assigned_rate_limits": { 00:20:59.201 "rw_ios_per_sec": 0, 00:20:59.201 "rw_mbytes_per_sec": 0, 00:20:59.201 "r_mbytes_per_sec": 0, 00:20:59.201 "w_mbytes_per_sec": 0 00:20:59.201 }, 00:20:59.201 "claimed": false, 00:20:59.201 "zoned": false, 00:20:59.201 "supported_io_types": { 00:20:59.201 "read": true, 00:20:59.201 "write": true, 00:20:59.201 "unmap": false, 00:20:59.201 "flush": true, 00:20:59.201 "reset": true, 00:20:59.201 "nvme_admin": true, 00:20:59.201 "nvme_io": true, 00:20:59.201 "nvme_io_md": false, 00:20:59.201 "write_zeroes": true, 00:20:59.201 "zcopy": false, 00:20:59.201 "get_zone_info": false, 00:20:59.201 "zone_management": false, 00:20:59.201 "zone_append": false, 00:20:59.201 "compare": true, 00:20:59.201 "compare_and_write": true, 00:20:59.201 "abort": true, 00:20:59.201 "seek_hole": false, 00:20:59.201 "seek_data": false, 00:20:59.201 "copy": true, 00:20:59.201 "nvme_iov_md": false 00:20:59.201 }, 00:20:59.201 "memory_domains": [ 
00:20:59.201 { 00:20:59.201 "dma_device_id": "system", 00:20:59.201 "dma_device_type": 1 00:20:59.201 } 00:20:59.201 ], 00:20:59.201 "driver_specific": { 00:20:59.201 "nvme": [ 00:20:59.201 { 00:20:59.201 "trid": { 00:20:59.201 "trtype": "TCP", 00:20:59.201 "adrfam": "IPv4", 00:20:59.201 "traddr": "10.0.0.2", 00:20:59.201 "trsvcid": "4420", 00:20:59.201 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:59.201 }, 00:20:59.201 "ctrlr_data": { 00:20:59.201 "cntlid": 2, 00:20:59.201 "vendor_id": "0x8086", 00:20:59.201 "model_number": "SPDK bdev Controller", 00:20:59.201 "serial_number": "00000000000000000000", 00:20:59.201 "firmware_revision": "25.01", 00:20:59.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:59.201 "oacs": { 00:20:59.201 "security": 0, 00:20:59.201 "format": 0, 00:20:59.201 "firmware": 0, 00:20:59.201 "ns_manage": 0 00:20:59.201 }, 00:20:59.201 "multi_ctrlr": true, 00:20:59.201 "ana_reporting": false 00:20:59.201 }, 00:20:59.201 "vs": { 00:20:59.201 "nvme_version": "1.3" 00:20:59.201 }, 00:20:59.201 "ns_data": { 00:20:59.201 "id": 1, 00:20:59.201 "can_share": true 00:20:59.201 } 00:20:59.201 } 00:20:59.201 ], 00:20:59.201 "mp_policy": "active_passive" 00:20:59.201 } 00:20:59.201 } 00:20:59.201 ] 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bGCgLthlnR 
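
The key material written to /tmp/tmp.bGCgLthlnR above follows the NVMe/TCP TLS PSK interchange format, `NVMeTLSkey-1:<hash>:<base64 payload>:`, where the `01` hash field in the trace selects HMAC-SHA-256 to our reading of the spec. A shallow, illustrative format check (the function name and the set of accepted hash fields are our assumptions, not an SPDK API):

```shell
#!/bin/sh
# Shallow syntactic check of an NVMe/TCP TLS PSK interchange string:
# "NVMeTLSkey-1:" prefix, two-digit hash field, payload, trailing colon.
is_nvme_tls_psk() {
    case "$1" in
        NVMeTLSkey-1:0[012]:*:) return 0 ;;
        *)                      return 1 ;;
    esac
}
```

The key from the trace, `NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:`, passes this check; the test also chmods the file to 0600 before handing it to `keyring_file_add_key key0`.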
00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bGCgLthlnR 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.bGCgLthlnR 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.201 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.201 [2024-11-19 11:22:54.696715] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.460 [2024-11-19 11:22:54.696859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.460 [2024-11-19 11:22:54.712745] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.460 nvme0n1 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.460 [ 00:20:59.460 { 00:20:59.460 "name": "nvme0n1", 00:20:59.460 "aliases": [ 00:20:59.460 "910d83a0-fc10-4347-9d29-9d756ea40dd5" 00:20:59.460 ], 00:20:59.460 "product_name": "NVMe disk", 00:20:59.460 "block_size": 512, 00:20:59.460 "num_blocks": 2097152, 00:20:59.460 "uuid": "910d83a0-fc10-4347-9d29-9d756ea40dd5", 00:20:59.460 "numa_id": 1, 00:20:59.460 "assigned_rate_limits": { 00:20:59.460 "rw_ios_per_sec": 0, 00:20:59.460 
"rw_mbytes_per_sec": 0, 00:20:59.460 "r_mbytes_per_sec": 0, 00:20:59.460 "w_mbytes_per_sec": 0 00:20:59.460 }, 00:20:59.460 "claimed": false, 00:20:59.460 "zoned": false, 00:20:59.460 "supported_io_types": { 00:20:59.460 "read": true, 00:20:59.460 "write": true, 00:20:59.460 "unmap": false, 00:20:59.460 "flush": true, 00:20:59.460 "reset": true, 00:20:59.460 "nvme_admin": true, 00:20:59.460 "nvme_io": true, 00:20:59.460 "nvme_io_md": false, 00:20:59.460 "write_zeroes": true, 00:20:59.460 "zcopy": false, 00:20:59.460 "get_zone_info": false, 00:20:59.460 "zone_management": false, 00:20:59.460 "zone_append": false, 00:20:59.460 "compare": true, 00:20:59.460 "compare_and_write": true, 00:20:59.460 "abort": true, 00:20:59.460 "seek_hole": false, 00:20:59.460 "seek_data": false, 00:20:59.460 "copy": true, 00:20:59.460 "nvme_iov_md": false 00:20:59.460 }, 00:20:59.460 "memory_domains": [ 00:20:59.460 { 00:20:59.460 "dma_device_id": "system", 00:20:59.460 "dma_device_type": 1 00:20:59.460 } 00:20:59.460 ], 00:20:59.460 "driver_specific": { 00:20:59.460 "nvme": [ 00:20:59.460 { 00:20:59.460 "trid": { 00:20:59.460 "trtype": "TCP", 00:20:59.460 "adrfam": "IPv4", 00:20:59.460 "traddr": "10.0.0.2", 00:20:59.460 "trsvcid": "4421", 00:20:59.460 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:59.460 }, 00:20:59.460 "ctrlr_data": { 00:20:59.460 "cntlid": 3, 00:20:59.460 "vendor_id": "0x8086", 00:20:59.460 "model_number": "SPDK bdev Controller", 00:20:59.460 "serial_number": "00000000000000000000", 00:20:59.460 "firmware_revision": "25.01", 00:20:59.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:59.460 "oacs": { 00:20:59.460 "security": 0, 00:20:59.460 "format": 0, 00:20:59.460 "firmware": 0, 00:20:59.460 "ns_manage": 0 00:20:59.460 }, 00:20:59.460 "multi_ctrlr": true, 00:20:59.460 "ana_reporting": false 00:20:59.460 }, 00:20:59.460 "vs": { 00:20:59.460 "nvme_version": "1.3" 00:20:59.460 }, 00:20:59.460 "ns_data": { 00:20:59.460 "id": 1, 00:20:59.460 "can_share": true 00:20:59.460 } 
00:20:59.460 } 00:20:59.460 ], 00:20:59.460 "mp_policy": "active_passive" 00:20:59.460 } 00:20:59.460 } 00:20:59.460 ] 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.bGCgLthlnR 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.460 rmmod nvme_tcp 00:20:59.460 rmmod nvme_fabrics 00:20:59.460 rmmod nvme_keyring 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:20:59.460 11:22:54 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2667277 ']' 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2667277 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2667277 ']' 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2667277 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2667277 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2667277' 00:20:59.460 killing process with pid 2667277 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2667277 00:20:59.460 11:22:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2667277 00:20:59.720 11:22:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:59.720 11:22:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:59.720 11:22:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:59.720 11:22:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:20:59.720 11:22:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:20:59.720 11:22:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:59.720 
11:22:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:20:59.720 11:22:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:59.720 11:22:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:59.720 11:22:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.720 11:22:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.720 11:22:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:02.286 00:21:02.286 real 0m6.033s 00:21:02.286 user 0m2.337s 00:21:02.286 sys 0m2.177s 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.286 ************************************ 00:21:02.286 END TEST nvmf_async_init 00:21:02.286 ************************************ 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.286 ************************************ 00:21:02.286 START TEST dma 00:21:02.286 ************************************ 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:21:02.286 * Looking for test storage... 00:21:02.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:02.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.286 --rc genhtml_branch_coverage=1 00:21:02.286 --rc genhtml_function_coverage=1 00:21:02.286 --rc genhtml_legend=1 00:21:02.286 --rc geninfo_all_blocks=1 00:21:02.286 --rc geninfo_unexecuted_blocks=1 00:21:02.286 00:21:02.286 ' 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:02.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.286 --rc genhtml_branch_coverage=1 00:21:02.286 --rc genhtml_function_coverage=1 
00:21:02.286 --rc genhtml_legend=1 00:21:02.286 --rc geninfo_all_blocks=1 00:21:02.286 --rc geninfo_unexecuted_blocks=1 00:21:02.286 00:21:02.286 ' 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:02.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.286 --rc genhtml_branch_coverage=1 00:21:02.286 --rc genhtml_function_coverage=1 00:21:02.286 --rc genhtml_legend=1 00:21:02.286 --rc geninfo_all_blocks=1 00:21:02.286 --rc geninfo_unexecuted_blocks=1 00:21:02.286 00:21:02.286 ' 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:02.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.286 --rc genhtml_branch_coverage=1 00:21:02.286 --rc genhtml_function_coverage=1 00:21:02.286 --rc genhtml_legend=1 00:21:02.286 --rc geninfo_all_blocks=1 00:21:02.286 --rc geninfo_unexecuted_blocks=1 00:21:02.286 00:21:02.286 ' 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.286 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:02.287 
11:22:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:02.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:02.287 00:21:02.287 real 0m0.171s 00:21:02.287 user 0m0.107s 00:21:02.287 sys 0m0.075s 00:21:02.287 11:22:57 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:02.287 ************************************ 00:21:02.287 END TEST dma 00:21:02.287 ************************************ 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.287 ************************************ 00:21:02.287 START TEST nvmf_identify 00:21:02.287 ************************************ 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:02.287 * Looking for test storage... 
00:21:02.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:02.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.287 --rc genhtml_branch_coverage=1 00:21:02.287 --rc genhtml_function_coverage=1 00:21:02.287 --rc genhtml_legend=1 00:21:02.287 --rc geninfo_all_blocks=1 00:21:02.287 --rc geninfo_unexecuted_blocks=1 00:21:02.287 00:21:02.287 ' 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:21:02.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.287 --rc genhtml_branch_coverage=1 00:21:02.287 --rc genhtml_function_coverage=1 00:21:02.287 --rc genhtml_legend=1 00:21:02.287 --rc geninfo_all_blocks=1 00:21:02.287 --rc geninfo_unexecuted_blocks=1 00:21:02.287 00:21:02.287 ' 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:02.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.287 --rc genhtml_branch_coverage=1 00:21:02.287 --rc genhtml_function_coverage=1 00:21:02.287 --rc genhtml_legend=1 00:21:02.287 --rc geninfo_all_blocks=1 00:21:02.287 --rc geninfo_unexecuted_blocks=1 00:21:02.287 00:21:02.287 ' 00:21:02.287 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:02.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.288 --rc genhtml_branch_coverage=1 00:21:02.288 --rc genhtml_function_coverage=1 00:21:02.288 --rc genhtml_legend=1 00:21:02.288 --rc geninfo_all_blocks=1 00:21:02.288 --rc geninfo_unexecuted_blocks=1 00:21:02.288 00:21:02.288 ' 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:02.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:02.288 11:22:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.841 11:23:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:04.841 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.841 
11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:04.841 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:04.841 Found net devices under 0000:82:00.0: cvl_0_0 00:21:04.841 11:23:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:04.841 Found net devices under 0000:82:00.1: cvl_0_1 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
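The discovery loop traced above resolves each supported PCI function to its kernel net device by globbing sysfs and stripping each hit down to the bare interface name (`pci_net_devs=("${pci_net_devs[@]##*/}")`). A standalone sketch of that step — the sysfs root is parameterized here purely so the sketch can be exercised against a fake tree, which the real script does not do:

```shell
# Resolve a PCI address to its net device name(s) the way common.sh does:
# glob .../devices/<pci>/net/* and keep only the last path component.
pci_to_netdevs() {
    local root=$1 pci=$2
    local devs
    devs=("$root/devices/$pci/net/"*)
    devs=("${devs[@]##*/}")   # e.g. .../net/cvl_0_0 -> cvl_0_0
    printf '%s\n' "${devs[@]}"
}
```

With the real `/sys/bus/pci` as the root, this yields exactly the `cvl_0_0` / `cvl_0_1` names echoed in the log.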
00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:04.841 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:04.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:21:04.842 00:21:04.842 --- 10.0.0.2 ping statistics --- 00:21:04.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.842 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:21:04.842 00:21:04.842 --- 10.0.0.1 ping statistics --- 00:21:04.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.842 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2669836 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2669836 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2669836 ']' 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
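The `waitforlisten` call above blocks until the freshly launched `nvmf_tgt` is accepting RPCs on /var/tmp/spdk.sock. A minimal sketch of that polling idea — the retry count mirrors the `max_retries=100` visible in the trace, but the function itself is illustrative, not SPDK's implementation (which also probes the RPC itself):

```shell
# Poll until a UNIX-domain socket appears, up to max_retries * 0.1s.
# Returns 0 once the socket exists, 1 on timeout.
waitforsock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        if [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```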
00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.842 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:04.842 [2024-11-19 11:23:00.331056] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:21:04.842 [2024-11-19 11:23:00.331139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.100 [2024-11-19 11:23:00.425783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.100 [2024-11-19 11:23:00.488034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.100 [2024-11-19 11:23:00.488102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.100 [2024-11-19 11:23:00.488124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.100 [2024-11-19 11:23:00.488142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.100 [2024-11-19 11:23:00.488156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
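The `-m 0xF` mask handed to `nvmf_tgt` (surfacing as `-c 0xF` in the DPDK EAL parameters above) is why four reactors come up, one per core 0-3. Decoding such a hex mask into a core list is a small bit-twiddling exercise; a helper for illustration:

```shell
# Expand a hex core mask into the list of set core indices,
# e.g. 0xF -> "0 1 2 3", matching the reactors started in the log.
mask_to_cores() {
    local mask=$(( $1 )) core=0
    local out=()
    while [ "$mask" -gt 0 ]; do
        if [ $(( mask & 1 )) -eq 1 ]; then
            out+=("$core")
        fi
        core=$(( core + 1 ))
        mask=$(( mask >> 1 ))
    done
    echo "${out[*]}"
}
mask_to_cores 0xF   # 0 1 2 3
```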
00:21:05.100 [2024-11-19 11:23:00.489859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.100 [2024-11-19 11:23:00.489914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.100 [2024-11-19 11:23:00.489980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.100 [2024-11-19 11:23:00.489983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 [2024-11-19 11:23:00.619114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 Malloc0 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.359 11:23:00 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 [2024-11-19 11:23:00.717763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 11:23:00 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.359 [ 00:21:05.359 { 00:21:05.359 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:05.359 "subtype": "Discovery", 00:21:05.359 "listen_addresses": [ 00:21:05.359 { 00:21:05.359 "trtype": "TCP", 00:21:05.359 "adrfam": "IPv4", 00:21:05.359 "traddr": "10.0.0.2", 00:21:05.359 "trsvcid": "4420" 00:21:05.359 } 00:21:05.359 ], 00:21:05.359 "allow_any_host": true, 00:21:05.359 "hosts": [] 00:21:05.359 }, 00:21:05.359 { 00:21:05.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.359 "subtype": "NVMe", 00:21:05.359 "listen_addresses": [ 00:21:05.359 { 00:21:05.359 "trtype": "TCP", 00:21:05.359 "adrfam": "IPv4", 00:21:05.359 "traddr": "10.0.0.2", 00:21:05.359 "trsvcid": "4420" 00:21:05.359 } 00:21:05.359 ], 00:21:05.359 "allow_any_host": true, 00:21:05.359 "hosts": [], 00:21:05.359 "serial_number": "SPDK00000000000001", 00:21:05.359 "model_number": "SPDK bdev Controller", 00:21:05.359 "max_namespaces": 32, 00:21:05.359 "min_cntlid": 1, 00:21:05.359 "max_cntlid": 65519, 00:21:05.359 "namespaces": [ 00:21:05.359 { 00:21:05.359 "nsid": 1, 00:21:05.359 "bdev_name": "Malloc0", 00:21:05.359 "name": "Malloc0", 00:21:05.359 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:05.359 "eui64": "ABCDEF0123456789", 00:21:05.359 "uuid": "59b55299-7c1b-4a75-ba6a-6dd4b328f5e3" 00:21:05.359 } 00:21:05.359 ] 00:21:05.359 } 00:21:05.359 ] 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.359 11:23:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:05.359 [2024-11-19 11:23:00.760420] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:21:05.359 [2024-11-19 11:23:00.760467] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669861 ] 00:21:05.359 [2024-11-19 11:23:00.809142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:05.359 [2024-11-19 11:23:00.809217] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:05.359 [2024-11-19 11:23:00.809228] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:05.359 [2024-11-19 11:23:00.809245] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:05.359 [2024-11-19 11:23:00.809265] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:05.359 [2024-11-19 11:23:00.816823] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:05.359 [2024-11-19 11:23:00.816882] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbe1690 0 00:21:05.359 [2024-11-19 11:23:00.817055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:05.359 [2024-11-19 11:23:00.817073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:05.359 [2024-11-19 11:23:00.817081] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:05.359 [2024-11-19 11:23:00.817087] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:05.359 [2024-11-19 11:23:00.817133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.359 [2024-11-19 11:23:00.817146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.359 [2024-11-19 11:23:00.817153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbe1690) 00:21:05.359 [2024-11-19 11:23:00.817172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:05.359 [2024-11-19 11:23:00.817196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43100, cid 0, qid 0 00:21:05.359 [2024-11-19 11:23:00.824394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.359 [2024-11-19 11:23:00.824413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.359 [2024-11-19 11:23:00.824420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.359 [2024-11-19 11:23:00.824427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43100) on tqpair=0xbe1690 00:21:05.359 [2024-11-19 11:23:00.824449] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:05.360 [2024-11-19 11:23:00.824462] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:05.360 [2024-11-19 11:23:00.824472] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:05.360 [2024-11-19 11:23:00.824495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.824504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.824510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbe1690) 
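Stepping back from the connect trace for a moment: the subsystem reported by `nvmf_get_subsystems` earlier was provisioned by a handful of `rpc_cmd` calls in identify.sh. Condensed here as direct `rpc.py` invocations, emitted as text so the sketch runs without a live target; the `scripts/rpc.py` path and the default RPC socket are assumptions:

```shell
# The target-side provisioning sequence from identify.sh, printed rather
# than executed so no running nvmf_tgt is required.
provision_cmds() {
    cat <<'EOF'
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
EOF
}
provision_cmds
```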
00:21:05.360 [2024-11-19 11:23:00.824522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.360 [2024-11-19 11:23:00.824546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43100, cid 0, qid 0 00:21:05.360 [2024-11-19 11:23:00.824644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.360 [2024-11-19 11:23:00.824672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.360 [2024-11-19 11:23:00.824679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.824685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43100) on tqpair=0xbe1690 00:21:05.360 [2024-11-19 11:23:00.824695] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:05.360 [2024-11-19 11:23:00.824708] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:05.360 [2024-11-19 11:23:00.824735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.824743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.824748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbe1690) 00:21:05.360 [2024-11-19 11:23:00.824759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.360 [2024-11-19 11:23:00.824780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43100, cid 0, qid 0 00:21:05.360 [2024-11-19 11:23:00.824864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.360 [2024-11-19 11:23:00.824876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
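The repeated `FABRIC PROPERTY GET` commands above are the host reading the controller's register map over the fabric in the usual init order (VS, CAP, then CC/CSTS for the enable handshake). For reference, the standard byte offsets of those registers per the NVMe base specification:

```shell
# Standard NVMe controller register offsets, the properties behind the
# PROPERTY GET/SET exchanges in the trace.
prop_offset() {
    case "$1" in
        CAP)  echo 0x0  ;;   # Controller Capabilities
        VS)   echo 0x8  ;;   # Version
        CC)   echo 0x14 ;;   # Controller Configuration
        CSTS) echo 0x1c ;;   # Controller Status
        *)    return 1  ;;
    esac
}
```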
00:21:05.360 [2024-11-19 11:23:00.824883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.824889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43100) on tqpair=0xbe1690 00:21:05.360 [2024-11-19 11:23:00.824898] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:05.360 [2024-11-19 11:23:00.824911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:05.360 [2024-11-19 11:23:00.824923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.824930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.824935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbe1690) 00:21:05.360 [2024-11-19 11:23:00.824945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.360 [2024-11-19 11:23:00.824965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43100, cid 0, qid 0 00:21:05.360 [2024-11-19 11:23:00.825047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.360 [2024-11-19 11:23:00.825060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.360 [2024-11-19 11:23:00.825066] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.825072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43100) on tqpair=0xbe1690 00:21:05.360 [2024-11-19 11:23:00.825081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:05.360 [2024-11-19 11:23:00.825096] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.825104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.825110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbe1690) 00:21:05.360 [2024-11-19 11:23:00.825120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.360 [2024-11-19 11:23:00.825140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43100, cid 0, qid 0 00:21:05.360 [2024-11-19 11:23:00.825215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.360 [2024-11-19 11:23:00.825226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.360 [2024-11-19 11:23:00.825232] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.825238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43100) on tqpair=0xbe1690 00:21:05.360 [2024-11-19 11:23:00.825246] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:05.360 [2024-11-19 11:23:00.825254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:05.360 [2024-11-19 11:23:00.825266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:05.360 [2024-11-19 11:23:00.825376] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:05.360 [2024-11-19 11:23:00.825387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:21:05.360 [2024-11-19 11:23:00.825403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.825426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.825432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbe1690) 00:21:05.360 [2024-11-19 11:23:00.825449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.360 [2024-11-19 11:23:00.825473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43100, cid 0, qid 0 00:21:05.360 [2024-11-19 11:23:00.825588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.360 [2024-11-19 11:23:00.825602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.360 [2024-11-19 11:23:00.825608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.825615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43100) on tqpair=0xbe1690 00:21:05.360 [2024-11-19 11:23:00.825623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:05.360 [2024-11-19 11:23:00.825654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.825663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.825669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbe1690) 00:21:05.360 [2024-11-19 11:23:00.825679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.360 [2024-11-19 11:23:00.825700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43100, cid 0, qid 0 00:21:05.360 [2024-11-19 
11:23:00.825791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.360 [2024-11-19 11:23:00.825804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.360 [2024-11-19 11:23:00.825810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.825816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43100) on tqpair=0xbe1690 00:21:05.360 [2024-11-19 11:23:00.825823] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:05.360 [2024-11-19 11:23:00.825831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:05.360 [2024-11-19 11:23:00.825844] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:05.360 [2024-11-19 11:23:00.825865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:05.360 [2024-11-19 11:23:00.825881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.825888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbe1690) 00:21:05.360 [2024-11-19 11:23:00.825898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.360 [2024-11-19 11:23:00.825919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43100, cid 0, qid 0 00:21:05.360 [2024-11-19 11:23:00.826045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.360 [2024-11-19 11:23:00.826057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:21:05.360 [2024-11-19 11:23:00.826064] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.826070] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbe1690): datao=0, datal=4096, cccid=0 00:21:05.360 [2024-11-19 11:23:00.826077] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc43100) on tqpair(0xbe1690): expected_datao=0, payload_size=4096 00:21:05.360 [2024-11-19 11:23:00.826083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.826100] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.360 [2024-11-19 11:23:00.826109] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.621 [2024-11-19 11:23:00.867482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.621 [2024-11-19 11:23:00.867491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43100) on tqpair=0xbe1690 00:21:05.621 [2024-11-19 11:23:00.867518] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:05.621 [2024-11-19 11:23:00.867527] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:05.621 [2024-11-19 11:23:00.867534] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:05.621 [2024-11-19 11:23:00.867549] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:05.621 [2024-11-19 11:23:00.867559] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:21:05.621 [2024-11-19 11:23:00.867567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:05.621 [2024-11-19 11:23:00.867586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:05.621 [2024-11-19 11:23:00.867601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbe1690) 00:21:05.621 [2024-11-19 11:23:00.867627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:05.621 [2024-11-19 11:23:00.867650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43100, cid 0, qid 0 00:21:05.621 [2024-11-19 11:23:00.867764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.621 [2024-11-19 11:23:00.867778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.621 [2024-11-19 11:23:00.867784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43100) on tqpair=0xbe1690 00:21:05.621 [2024-11-19 11:23:00.867803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbe1690) 00:21:05.621 [2024-11-19 11:23:00.867825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.621 [2024-11-19 11:23:00.867835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbe1690) 00:21:05.621 [2024-11-19 11:23:00.867855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.621 [2024-11-19 11:23:00.867864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867870] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbe1690) 00:21:05.621 [2024-11-19 11:23:00.867883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.621 [2024-11-19 11:23:00.867892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbe1690) 00:21:05.621 [2024-11-19 11:23:00.867915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.621 [2024-11-19 11:23:00.867924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:05.621 [2024-11-19 11:23:00.867939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:21:05.621 [2024-11-19 11:23:00.867950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.621 [2024-11-19 11:23:00.867956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbe1690) 00:21:05.621 [2024-11-19 11:23:00.867965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.621 [2024-11-19 11:23:00.867987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43100, cid 0, qid 0 00:21:05.621 [2024-11-19 11:23:00.867998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43280, cid 1, qid 0 00:21:05.621 [2024-11-19 11:23:00.868005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43400, cid 2, qid 0 00:21:05.621 [2024-11-19 11:23:00.868012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43580, cid 3, qid 0 00:21:05.622 [2024-11-19 11:23:00.868019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43700, cid 4, qid 0 00:21:05.622 [2024-11-19 11:23:00.868136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.622 [2024-11-19 11:23:00.868147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.622 [2024-11-19 11:23:00.868153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.868159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43700) on tqpair=0xbe1690 00:21:05.622 [2024-11-19 11:23:00.868183] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:05.622 [2024-11-19 11:23:00.868193] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:05.622 [2024-11-19 11:23:00.868210] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.868218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbe1690) 00:21:05.622 [2024-11-19 11:23:00.868228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.622 [2024-11-19 11:23:00.868248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43700, cid 4, qid 0 00:21:05.622 [2024-11-19 11:23:00.868337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.622 [2024-11-19 11:23:00.872391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.622 [2024-11-19 11:23:00.872403] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872409] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbe1690): datao=0, datal=4096, cccid=4 00:21:05.622 [2024-11-19 11:23:00.872417] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc43700) on tqpair(0xbe1690): expected_datao=0, payload_size=4096 00:21:05.622 [2024-11-19 11:23:00.872424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872441] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872450] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.622 [2024-11-19 11:23:00.872470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.622 [2024-11-19 11:23:00.872476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43700) on tqpair=0xbe1690 00:21:05.622 [2024-11-19 11:23:00.872509] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:05.622 [2024-11-19 11:23:00.872552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbe1690) 00:21:05.622 [2024-11-19 11:23:00.872573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.622 [2024-11-19 11:23:00.872585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbe1690) 00:21:05.622 [2024-11-19 11:23:00.872606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.622 [2024-11-19 11:23:00.872635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43700, cid 4, qid 0 00:21:05.622 [2024-11-19 11:23:00.872646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43880, cid 5, qid 0 00:21:05.622 [2024-11-19 11:23:00.872841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.622 [2024-11-19 11:23:00.872855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.622 [2024-11-19 11:23:00.872861] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872867] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbe1690): datao=0, datal=1024, cccid=4 00:21:05.622 [2024-11-19 11:23:00.872874] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc43700) on tqpair(0xbe1690): expected_datao=0, 
payload_size=1024 00:21:05.622 [2024-11-19 11:23:00.872880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872889] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872896] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.622 [2024-11-19 11:23:00.872912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.622 [2024-11-19 11:23:00.872917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.872923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43880) on tqpair=0xbe1690 00:21:05.622 [2024-11-19 11:23:00.914378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.622 [2024-11-19 11:23:00.914396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.622 [2024-11-19 11:23:00.914403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.914410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43700) on tqpair=0xbe1690 00:21:05.622 [2024-11-19 11:23:00.914428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.914437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbe1690) 00:21:05.622 [2024-11-19 11:23:00.914449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.622 [2024-11-19 11:23:00.914480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43700, cid 4, qid 0 00:21:05.622 [2024-11-19 11:23:00.914655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.622 [2024-11-19 11:23:00.914670] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.622 [2024-11-19 11:23:00.914676] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.914682] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbe1690): datao=0, datal=3072, cccid=4 00:21:05.622 [2024-11-19 11:23:00.914689] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc43700) on tqpair(0xbe1690): expected_datao=0, payload_size=3072 00:21:05.622 [2024-11-19 11:23:00.914699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.914720] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.914729] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.956461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.622 [2024-11-19 11:23:00.956478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.622 [2024-11-19 11:23:00.956485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.956492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43700) on tqpair=0xbe1690 00:21:05.622 [2024-11-19 11:23:00.956507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.622 [2024-11-19 11:23:00.956516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbe1690) 00:21:05.622 [2024-11-19 11:23:00.956527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.622 [2024-11-19 11:23:00.956557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43700, cid 4, qid 0 00:21:05.622 [2024-11-19 11:23:00.956658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.622 [2024-11-19 
11:23:00.956670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:05.622 [2024-11-19 11:23:00.956676] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:05.622 [2024-11-19 11:23:00.956682] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbe1690): datao=0, datal=8, cccid=4
00:21:05.622 [2024-11-19 11:23:00.956689] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc43700) on tqpair(0xbe1690): expected_datao=0, payload_size=8
00:21:05.622 [2024-11-19 11:23:00.956696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:05.622 [2024-11-19 11:23:00.956705] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:05.622 [2024-11-19 11:23:00.956711] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:05.622 [2024-11-19 11:23:01.001378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:05.622 [2024-11-19 11:23:01.001396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:05.622 [2024-11-19 11:23:01.001403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:05.622 [2024-11-19 11:23:01.001410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43700) on tqpair=0xbe1690
00:21:05.622 =====================================================
00:21:05.622 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:21:05.622 =====================================================
00:21:05.622 Controller Capabilities/Features
00:21:05.622 ================================
00:21:05.622 Vendor ID: 0000
00:21:05.622 Subsystem Vendor ID: 0000
00:21:05.622 Serial Number: ....................
00:21:05.622 Model Number: ........................................
00:21:05.622 Firmware Version: 25.01
00:21:05.622 Recommended Arb Burst: 0
00:21:05.622 IEEE OUI Identifier: 00 00 00
00:21:05.622 Multi-path I/O
00:21:05.622 May have multiple subsystem ports: No
00:21:05.622 May have multiple controllers: No
00:21:05.622 Associated with SR-IOV VF: No
00:21:05.622 Max Data Transfer Size: 131072
00:21:05.622 Max Number of Namespaces: 0
00:21:05.622 Max Number of I/O Queues: 1024
00:21:05.622 NVMe Specification Version (VS): 1.3
00:21:05.622 NVMe Specification Version (Identify): 1.3
00:21:05.622 Maximum Queue Entries: 128
00:21:05.622 Contiguous Queues Required: Yes
00:21:05.622 Arbitration Mechanisms Supported
00:21:05.622 Weighted Round Robin: Not Supported
00:21:05.622 Vendor Specific: Not Supported
00:21:05.622 Reset Timeout: 15000 ms
00:21:05.622 Doorbell Stride: 4 bytes
00:21:05.622 NVM Subsystem Reset: Not Supported
00:21:05.622 Command Sets Supported
00:21:05.622 NVM Command Set: Supported
00:21:05.622 Boot Partition: Not Supported
00:21:05.622 Memory Page Size Minimum: 4096 bytes
00:21:05.622 Memory Page Size Maximum: 4096 bytes
00:21:05.622 Persistent Memory Region: Not Supported
00:21:05.623 Optional Asynchronous Events Supported
00:21:05.623 Namespace Attribute Notices: Not Supported
00:21:05.623 Firmware Activation Notices: Not Supported
00:21:05.623 ANA Change Notices: Not Supported
00:21:05.623 PLE Aggregate Log Change Notices: Not Supported
00:21:05.623 LBA Status Info Alert Notices: Not Supported
00:21:05.623 EGE Aggregate Log Change Notices: Not Supported
00:21:05.623 Normal NVM Subsystem Shutdown event: Not Supported
00:21:05.623 Zone Descriptor Change Notices: Not Supported
00:21:05.623 Discovery Log Change Notices: Supported
00:21:05.623 Controller Attributes
00:21:05.623 128-bit Host Identifier: Not Supported
00:21:05.623 Non-Operational Permissive Mode: Not Supported
00:21:05.623 NVM Sets: Not Supported
00:21:05.623 Read Recovery Levels: Not Supported
00:21:05.623 Endurance Groups: Not Supported
00:21:05.623 Predictable Latency Mode: Not Supported
00:21:05.623 Traffic Based Keep ALive: Not Supported
00:21:05.623 Namespace Granularity: Not Supported
00:21:05.623 SQ Associations: Not Supported
00:21:05.623 UUID List: Not Supported
00:21:05.623 Multi-Domain Subsystem: Not Supported
00:21:05.623 Fixed Capacity Management: Not Supported
00:21:05.623 Variable Capacity Management: Not Supported
00:21:05.623 Delete Endurance Group: Not Supported
00:21:05.623 Delete NVM Set: Not Supported
00:21:05.623 Extended LBA Formats Supported: Not Supported
00:21:05.623 Flexible Data Placement Supported: Not Supported
00:21:05.623
00:21:05.623 Controller Memory Buffer Support
00:21:05.623 ================================
00:21:05.623 Supported: No
00:21:05.623
00:21:05.623 Persistent Memory Region Support
00:21:05.623 ================================
00:21:05.623 Supported: No
00:21:05.623
00:21:05.623 Admin Command Set Attributes
00:21:05.623 ============================
00:21:05.623 Security Send/Receive: Not Supported
00:21:05.623 Format NVM: Not Supported
00:21:05.623 Firmware Activate/Download: Not Supported
00:21:05.623 Namespace Management: Not Supported
00:21:05.623 Device Self-Test: Not Supported
00:21:05.623 Directives: Not Supported
00:21:05.623 NVMe-MI: Not Supported
00:21:05.623 Virtualization Management: Not Supported
00:21:05.623 Doorbell Buffer Config: Not Supported
00:21:05.623 Get LBA Status Capability: Not Supported
00:21:05.623 Command & Feature Lockdown Capability: Not Supported
00:21:05.623 Abort Command Limit: 1
00:21:05.623 Async Event Request Limit: 4
00:21:05.623 Number of Firmware Slots: N/A
00:21:05.623 Firmware Slot 1 Read-Only: N/A
00:21:05.623 Firmware Activation Without Reset: N/A
00:21:05.623 Multiple Update Detection Support: N/A
00:21:05.623 Firmware Update Granularity: No Information Provided
00:21:05.623 Per-Namespace SMART Log: No
00:21:05.623 Asymmetric Namespace Access Log Page: Not Supported
00:21:05.623 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:21:05.623 Command Effects Log Page: Not Supported
00:21:05.623 Get Log Page Extended Data: Supported
00:21:05.623 Telemetry Log Pages: Not Supported
00:21:05.623 Persistent Event Log Pages: Not Supported
00:21:05.623 Supported Log Pages Log Page: May Support
00:21:05.623 Commands Supported & Effects Log Page: Not Supported
00:21:05.623 Feature Identifiers & Effects Log Page: May Support
00:21:05.623 NVMe-MI Commands & Effects Log Page: May Support
00:21:05.623 Data Area 4 for Telemetry Log: Not Supported
00:21:05.623 Error Log Page Entries Supported: 128
00:21:05.623 Keep Alive: Not Supported
00:21:05.623
00:21:05.623 NVM Command Set Attributes
00:21:05.623 ==========================
00:21:05.623 Submission Queue Entry Size
00:21:05.623 Max: 1
00:21:05.623 Min: 1
00:21:05.623 Completion Queue Entry Size
00:21:05.623 Max: 1
00:21:05.623 Min: 1
00:21:05.623 Number of Namespaces: 0
00:21:05.623 Compare Command: Not Supported
00:21:05.623 Write Uncorrectable Command: Not Supported
00:21:05.623 Dataset Management Command: Not Supported
00:21:05.623 Write Zeroes Command: Not Supported
00:21:05.623 Set Features Save Field: Not Supported
00:21:05.623 Reservations: Not Supported
00:21:05.623 Timestamp: Not Supported
00:21:05.623 Copy: Not Supported
00:21:05.623 Volatile Write Cache: Not Present
00:21:05.623 Atomic Write Unit (Normal): 1
00:21:05.623 Atomic Write Unit (PFail): 1
00:21:05.623 Atomic Compare & Write Unit: 1
00:21:05.623 Fused Compare & Write: Supported
00:21:05.623 Scatter-Gather List
00:21:05.623 SGL Command Set: Supported
00:21:05.623 SGL Keyed: Supported
00:21:05.623 SGL Bit Bucket Descriptor: Not Supported
00:21:05.623 SGL Metadata Pointer: Not Supported
00:21:05.623 Oversized SGL: Not Supported
00:21:05.623 SGL Metadata Address: Not Supported
00:21:05.623 SGL Offset: Supported
00:21:05.623 Transport SGL Data Block: Not Supported
00:21:05.623 Replay Protected Memory Block: Not Supported
00:21:05.623
00:21:05.623 Firmware Slot Information
00:21:05.623 =========================
00:21:05.623 Active slot: 0
00:21:05.623
00:21:05.623
00:21:05.623 Error Log
00:21:05.623 =========
00:21:05.623
00:21:05.623 Active Namespaces
00:21:05.623 =================
00:21:05.623 Discovery Log Page
00:21:05.623 ==================
00:21:05.623 Generation Counter: 2
00:21:05.623 Number of Records: 2
00:21:05.623 Record Format: 0
00:21:05.623
00:21:05.623 Discovery Log Entry 0
00:21:05.623 ----------------------
00:21:05.623 Transport Type: 3 (TCP)
00:21:05.623 Address Family: 1 (IPv4)
00:21:05.623 Subsystem Type: 3 (Current Discovery Subsystem)
00:21:05.623 Entry Flags:
00:21:05.623 Duplicate Returned Information: 1
00:21:05.623 Explicit Persistent Connection Support for Discovery: 1
00:21:05.623 Transport Requirements:
00:21:05.623 Secure Channel: Not Required
00:21:05.623 Port ID: 0 (0x0000)
00:21:05.623 Controller ID: 65535 (0xffff)
00:21:05.623 Admin Max SQ Size: 128
00:21:05.623 Transport Service Identifier: 4420
00:21:05.623 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:21:05.623 Transport Address: 10.0.0.2
00:21:05.623 Discovery Log Entry 1
00:21:05.623 ----------------------
00:21:05.623 Transport Type: 3 (TCP)
00:21:05.623 Address Family: 1 (IPv4)
00:21:05.623 Subsystem Type: 2 (NVM Subsystem)
00:21:05.623 Entry Flags:
00:21:05.623 Duplicate Returned Information: 0
00:21:05.623 Explicit Persistent Connection Support for Discovery: 0
00:21:05.623 Transport Requirements:
00:21:05.623 Secure Channel: Not Required
00:21:05.623 Port ID: 0 (0x0000)
00:21:05.623 Controller ID: 65535 (0xffff)
00:21:05.623 Admin Max SQ Size: 128
00:21:05.623 Transport Service Identifier: 4420
00:21:05.623 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:21:05.623 Transport Address: 10.0.0.2
[2024-11-19 11:23:01.001538] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:21:05.623 [2024-11-19
11:23:01.001562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43100) on tqpair=0xbe1690 00:21:05.623 [2024-11-19 11:23:01.001576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.623 [2024-11-19 11:23:01.001585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43280) on tqpair=0xbe1690 00:21:05.623 [2024-11-19 11:23:01.001592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.623 [2024-11-19 11:23:01.001600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43400) on tqpair=0xbe1690 00:21:05.623 [2024-11-19 11:23:01.001607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.623 [2024-11-19 11:23:01.001614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43580) on tqpair=0xbe1690 00:21:05.623 [2024-11-19 11:23:01.001621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.623 [2024-11-19 11:23:01.001639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.623 [2024-11-19 11:23:01.001663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.623 [2024-11-19 11:23:01.001669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbe1690) 00:21:05.623 [2024-11-19 11:23:01.001679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.623 [2024-11-19 11:23:01.001709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43580, cid 3, qid 0 00:21:05.623 [2024-11-19 11:23:01.001789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.623 [2024-11-19 
11:23:01.001803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.623 [2024-11-19 11:23:01.001809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.623 [2024-11-19 11:23:01.001816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43580) on tqpair=0xbe1690 00:21:05.624 [2024-11-19 11:23:01.001827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.624 [2024-11-19 11:23:01.001834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.624 [2024-11-19 11:23:01.001840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbe1690) 00:21:05.624 [2024-11-19 11:23:01.001850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.624 [2024-11-19 11:23:01.001876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43580, cid 3, qid 0 00:21:05.624 [2024-11-19 11:23:01.001966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.624 [2024-11-19 11:23:01.001978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.624 [2024-11-19 11:23:01.001984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.624 [2024-11-19 11:23:01.001990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43580) on tqpair=0xbe1690 00:21:05.624 [2024-11-19 11:23:01.001999] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:05.624 [2024-11-19 11:23:01.002006] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:05.624 [2024-11-19 11:23:01.002021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.624 [2024-11-19 11:23:01.002029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.624 
[2024-11-19 11:23:01.002035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbe1690) 00:21:05.624 [2024-11-19 11:23:01.002045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.624 [2024-11-19 11:23:01.002064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43580, cid 3, qid 0 00:21:05.624 [2024-11-19 11:23:01.002140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.624 [2024-11-19 11:23:01.002151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.624 [2024-11-19 11:23:01.002157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.624 [2024-11-19 11:23:01.002163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43580) on tqpair=0xbe1690
00:21:05.625 [2024-11-19 11:23:01.009389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.625 [2024-11-19 11:23:01.009400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.625 [2024-11-19 11:23:01.009406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:05.625 [2024-11-19 11:23:01.009413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43580) on tqpair=0xbe1690 00:21:05.625 [2024-11-19 11:23:01.009430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.625 [2024-11-19 11:23:01.009439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.625 [2024-11-19 11:23:01.009445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbe1690) 00:21:05.625 [2024-11-19 11:23:01.009455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.625 [2024-11-19 11:23:01.009478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc43580, cid 3, qid 0 00:21:05.625 [2024-11-19 11:23:01.009592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.625 [2024-11-19 11:23:01.009606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.625 [2024-11-19 11:23:01.009612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.625 [2024-11-19 11:23:01.009619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc43580) on tqpair=0xbe1690 00:21:05.625 [2024-11-19 11:23:01.009631] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:21:05.625 00:21:05.626 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:05.626 [2024-11-19 11:23:01.046673] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:21:05.626 [2024-11-19 11:23:01.046720] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669866 ] 00:21:05.626 [2024-11-19 11:23:01.098030] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:05.626 [2024-11-19 11:23:01.098089] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:05.626 [2024-11-19 11:23:01.098100] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:05.626 [2024-11-19 11:23:01.098115] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:05.626 [2024-11-19 11:23:01.098129] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:05.626 [2024-11-19 11:23:01.101626] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:05.626 [2024-11-19 11:23:01.101684] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x59c690 0 00:21:05.626 [2024-11-19 11:23:01.109378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:05.626 [2024-11-19 11:23:01.109398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:05.626 [2024-11-19 11:23:01.109406] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:05.626 [2024-11-19 11:23:01.109412] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:05.626 [2024-11-19 11:23:01.109447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.626 [2024-11-19 11:23:01.109459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.626 [2024-11-19 11:23:01.109466] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x59c690) 00:21:05.626 [2024-11-19 11:23:01.109480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:05.626 [2024-11-19 11:23:01.109507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe100, cid 0, qid 0 00:21:05.887 [2024-11-19 11:23:01.117379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.887 [2024-11-19 11:23:01.117397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.887 [2024-11-19 11:23:01.117405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.887 [2024-11-19 11:23:01.117413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe100) on tqpair=0x59c690 00:21:05.887 [2024-11-19 11:23:01.117427] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:05.887 [2024-11-19 11:23:01.117438] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:05.887 [2024-11-19 11:23:01.117448] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:05.887 [2024-11-19 11:23:01.117466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.887 [2024-11-19 11:23:01.117476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.887 [2024-11-19 11:23:01.117483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x59c690) 00:21:05.887 [2024-11-19 11:23:01.117494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.887 [2024-11-19 11:23:01.117520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe100, cid 0, qid 0 00:21:05.887 [2024-11-19 11:23:01.117704] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.887 [2024-11-19 11:23:01.117719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.887 [2024-11-19 11:23:01.117726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.887 [2024-11-19 11:23:01.117732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe100) on tqpair=0x59c690 00:21:05.887 [2024-11-19 11:23:01.117740] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:05.887 [2024-11-19 11:23:01.117755] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:05.887 [2024-11-19 11:23:01.117767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.887 [2024-11-19 11:23:01.117775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.887 [2024-11-19 11:23:01.117781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x59c690) 00:21:05.887 [2024-11-19 11:23:01.117791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.887 [2024-11-19 11:23:01.117813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe100, cid 0, qid 0 00:21:05.887 [2024-11-19 11:23:01.117954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.887 [2024-11-19 11:23:01.117967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.888 [2024-11-19 11:23:01.117978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.117985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe100) on tqpair=0x59c690 00:21:05.888 [2024-11-19 11:23:01.117993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:21:05.888 [2024-11-19 11:23:01.118007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:05.888 [2024-11-19 11:23:01.118019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.118027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.118033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x59c690) 00:21:05.888 [2024-11-19 11:23:01.118043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.888 [2024-11-19 11:23:01.118070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe100, cid 0, qid 0 00:21:05.888 [2024-11-19 11:23:01.118216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.888 [2024-11-19 11:23:01.118233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.888 [2024-11-19 11:23:01.118241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.118247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe100) on tqpair=0x59c690 00:21:05.888 [2024-11-19 11:23:01.118256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:05.888 [2024-11-19 11:23:01.118275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.118284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.118290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x59c690) 00:21:05.888 [2024-11-19 11:23:01.118301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.888 [2024-11-19 11:23:01.118323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe100, cid 0, qid 0 00:21:05.888 [2024-11-19 11:23:01.118460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.888 [2024-11-19 11:23:01.118476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.888 [2024-11-19 11:23:01.118484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.118490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe100) on tqpair=0x59c690 00:21:05.888 [2024-11-19 11:23:01.118498] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:05.888 [2024-11-19 11:23:01.118506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:05.888 [2024-11-19 11:23:01.118521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:05.888 [2024-11-19 11:23:01.118631] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:05.888 [2024-11-19 11:23:01.118655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:05.888 [2024-11-19 11:23:01.118667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.118675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.118681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x59c690) 00:21:05.888 [2024-11-19 11:23:01.118691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.888 [2024-11-19 11:23:01.118728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe100, cid 0, qid 0 00:21:05.888 [2024-11-19 11:23:01.118914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.888 [2024-11-19 11:23:01.118928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.888 [2024-11-19 11:23:01.118935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.118941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe100) on tqpair=0x59c690 00:21:05.888 [2024-11-19 11:23:01.118948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:05.888 [2024-11-19 11:23:01.118965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.118974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.118980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x59c690) 00:21:05.888 [2024-11-19 11:23:01.118990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.888 [2024-11-19 11:23:01.119010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe100, cid 0, qid 0 00:21:05.888 [2024-11-19 11:23:01.119115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.888 [2024-11-19 11:23:01.119128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.888 [2024-11-19 11:23:01.119135] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe100) on tqpair=0x59c690 00:21:05.888 [2024-11-19 11:23:01.119148] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:05.888 [2024-11-19 11:23:01.119156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:05.888 [2024-11-19 11:23:01.119170] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:05.888 [2024-11-19 11:23:01.119184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:05.888 [2024-11-19 11:23:01.119198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x59c690) 00:21:05.888 [2024-11-19 11:23:01.119216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.888 [2024-11-19 11:23:01.119237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe100, cid 0, qid 0 00:21:05.888 [2024-11-19 11:23:01.119392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.888 [2024-11-19 11:23:01.119408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.888 [2024-11-19 11:23:01.119416] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119422] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x59c690): datao=0, datal=4096, cccid=0 00:21:05.888 [2024-11-19 11:23:01.119430] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5fe100) on tqpair(0x59c690): expected_datao=0, payload_size=4096 00:21:05.888 [2024-11-19 11:23:01.119438] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119448] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119456] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.888 [2024-11-19 11:23:01.119493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.888 [2024-11-19 11:23:01.119500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe100) on tqpair=0x59c690 00:21:05.888 [2024-11-19 11:23:01.119518] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:05.888 [2024-11-19 11:23:01.119531] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:05.888 [2024-11-19 11:23:01.119539] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:05.888 [2024-11-19 11:23:01.119553] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:05.888 [2024-11-19 11:23:01.119562] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:05.888 [2024-11-19 11:23:01.119570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:05.888 [2024-11-19 11:23:01.119589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:05.888 [2024-11-19 11:23:01.119603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119611] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x59c690) 00:21:05.888 [2024-11-19 11:23:01.119628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:05.888 [2024-11-19 11:23:01.119667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe100, cid 0, qid 0 00:21:05.888 [2024-11-19 11:23:01.119818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.888 [2024-11-19 11:23:01.119832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.888 [2024-11-19 11:23:01.119839] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe100) on tqpair=0x59c690 00:21:05.888 [2024-11-19 11:23:01.119855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x59c690) 00:21:05.888 [2024-11-19 11:23:01.119877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.888 [2024-11-19 11:23:01.119887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x59c690) 00:21:05.888 [2024-11-19 11:23:01.119908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:05.888 [2024-11-19 11:23:01.119917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.888 [2024-11-19 11:23:01.119930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x59c690) 00:21:05.889 [2024-11-19 11:23:01.119938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.889 [2024-11-19 11:23:01.119947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.119954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.119960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.889 [2024-11-19 11:23:01.119968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.889 [2024-11-19 11:23:01.119976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.119991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.120006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.120014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x59c690) 00:21:05.889 [2024-11-19 11:23:01.120024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.889 [2024-11-19 11:23:01.120046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x5fe100, cid 0, qid 0 00:21:05.889 [2024-11-19 11:23:01.120056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe280, cid 1, qid 0 00:21:05.889 [2024-11-19 11:23:01.120064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe400, cid 2, qid 0 00:21:05.889 [2024-11-19 11:23:01.120071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.889 [2024-11-19 11:23:01.120078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe700, cid 4, qid 0 00:21:05.889 [2024-11-19 11:23:01.120255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.889 [2024-11-19 11:23:01.120269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.889 [2024-11-19 11:23:01.120276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.120282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe700) on tqpair=0x59c690 00:21:05.889 [2024-11-19 11:23:01.120294] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:05.889 [2024-11-19 11:23:01.120303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.120318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.120330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.120355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.120374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.889 [2024-11-19 
11:23:01.120382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x59c690) 00:21:05.889 [2024-11-19 11:23:01.120393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:05.889 [2024-11-19 11:23:01.120416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe700, cid 4, qid 0 00:21:05.889 [2024-11-19 11:23:01.124374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.889 [2024-11-19 11:23:01.124391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.889 [2024-11-19 11:23:01.124399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.124406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe700) on tqpair=0x59c690 00:21:05.889 [2024-11-19 11:23:01.124477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.124500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.124517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.124525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x59c690) 00:21:05.889 [2024-11-19 11:23:01.124536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.889 [2024-11-19 11:23:01.124560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe700, cid 4, qid 0 00:21:05.889 [2024-11-19 11:23:01.124727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.889 [2024-11-19 11:23:01.124742] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.889 [2024-11-19 11:23:01.124749] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.124756] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x59c690): datao=0, datal=4096, cccid=4 00:21:05.889 [2024-11-19 11:23:01.124763] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5fe700) on tqpair(0x59c690): expected_datao=0, payload_size=4096 00:21:05.889 [2024-11-19 11:23:01.124771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.124789] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.124798] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.167373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.889 [2024-11-19 11:23:01.167391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.889 [2024-11-19 11:23:01.167399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.167405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe700) on tqpair=0x59c690 00:21:05.889 [2024-11-19 11:23:01.167423] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:05.889 [2024-11-19 11:23:01.167446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.167467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.167482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.167490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0x59c690) 00:21:05.889 [2024-11-19 11:23:01.167501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.889 [2024-11-19 11:23:01.167526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe700, cid 4, qid 0 00:21:05.889 [2024-11-19 11:23:01.167712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.889 [2024-11-19 11:23:01.167725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.889 [2024-11-19 11:23:01.167732] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.167738] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x59c690): datao=0, datal=4096, cccid=4 00:21:05.889 [2024-11-19 11:23:01.167745] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5fe700) on tqpair(0x59c690): expected_datao=0, payload_size=4096 00:21:05.889 [2024-11-19 11:23:01.167752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.167761] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.167769] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.167791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.889 [2024-11-19 11:23:01.167802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.889 [2024-11-19 11:23:01.167808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.167815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe700) on tqpair=0x59c690 00:21:05.889 [2024-11-19 11:23:01.167838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:05.889 [2024-11-19 
11:23:01.167859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.167873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.167880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x59c690) 00:21:05.889 [2024-11-19 11:23:01.167894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.889 [2024-11-19 11:23:01.167917] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe700, cid 4, qid 0 00:21:05.889 [2024-11-19 11:23:01.168022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.889 [2024-11-19 11:23:01.168036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.889 [2024-11-19 11:23:01.168043] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.168049] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x59c690): datao=0, datal=4096, cccid=4 00:21:05.889 [2024-11-19 11:23:01.168056] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5fe700) on tqpair(0x59c690): expected_datao=0, payload_size=4096 00:21:05.889 [2024-11-19 11:23:01.168063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.168079] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.168088] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.209543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.889 [2024-11-19 11:23:01.209560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.889 [2024-11-19 11:23:01.209567] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.889 [2024-11-19 11:23:01.209574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe700) on tqpair=0x59c690 00:21:05.889 [2024-11-19 11:23:01.209589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.209605] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.209622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.209634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:05.889 [2024-11-19 11:23:01.209643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:05.890 [2024-11-19 11:23:01.209652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:05.890 [2024-11-19 11:23:01.209676] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:05.890 [2024-11-19 11:23:01.209683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:05.890 [2024-11-19 11:23:01.209692] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:05.890 [2024-11-19 11:23:01.209711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.209719] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x59c690) 00:21:05.890 [2024-11-19 11:23:01.209730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.890 [2024-11-19 11:23:01.209741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.209748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.209754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x59c690) 00:21:05.890 [2024-11-19 11:23:01.209763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.890 [2024-11-19 11:23:01.209789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe700, cid 4, qid 0 00:21:05.890 [2024-11-19 11:23:01.209804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe880, cid 5, qid 0 00:21:05.890 [2024-11-19 11:23:01.209987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.890 [2024-11-19 11:23:01.210001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.890 [2024-11-19 11:23:01.210008] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.210014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe700) on tqpair=0x59c690 00:21:05.890 [2024-11-19 11:23:01.210024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.890 [2024-11-19 11:23:01.210033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.890 [2024-11-19 11:23:01.210039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.210045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe880) on tqpair=0x59c690 00:21:05.890 [2024-11-19 
11:23:01.210061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.210070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x59c690) 00:21:05.890 [2024-11-19 11:23:01.210080] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.890 [2024-11-19 11:23:01.210101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe880, cid 5, qid 0 00:21:05.890 [2024-11-19 11:23:01.210208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.890 [2024-11-19 11:23:01.210221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.890 [2024-11-19 11:23:01.210228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.210234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe880) on tqpair=0x59c690 00:21:05.890 [2024-11-19 11:23:01.210251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.210259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x59c690) 00:21:05.890 [2024-11-19 11:23:01.210269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.890 [2024-11-19 11:23:01.210290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe880, cid 5, qid 0 00:21:05.890 [2024-11-19 11:23:01.210407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.890 [2024-11-19 11:23:01.210422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.890 [2024-11-19 11:23:01.210428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.210435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x5fe880) on tqpair=0x59c690 00:21:05.890 [2024-11-19 11:23:01.210451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.210460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x59c690) 00:21:05.890 [2024-11-19 11:23:01.210471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.890 [2024-11-19 11:23:01.210493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe880, cid 5, qid 0 00:21:05.890 [2024-11-19 11:23:01.210624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.890 [2024-11-19 11:23:01.210637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.890 [2024-11-19 11:23:01.210643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.210650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe880) on tqpair=0x59c690 00:21:05.890 [2024-11-19 11:23:01.210674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.210700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x59c690) 00:21:05.890 [2024-11-19 11:23:01.210710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.890 [2024-11-19 11:23:01.210727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.210735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x59c690) 00:21:05.890 [2024-11-19 11:23:01.210744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.890 
[2024-11-19 11:23:01.210756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.210763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x59c690) 00:21:05.890 [2024-11-19 11:23:01.210772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.890 [2024-11-19 11:23:01.210783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.210790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x59c690) 00:21:05.890 [2024-11-19 11:23:01.210799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.890 [2024-11-19 11:23:01.210821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe880, cid 5, qid 0 00:21:05.890 [2024-11-19 11:23:01.210832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe700, cid 4, qid 0 00:21:05.890 [2024-11-19 11:23:01.210839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fea00, cid 6, qid 0 00:21:05.890 [2024-11-19 11:23:01.210846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5feb80, cid 7, qid 0 00:21:05.890 [2024-11-19 11:23:01.211073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.890 [2024-11-19 11:23:01.211086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.890 [2024-11-19 11:23:01.211093] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211099] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x59c690): datao=0, datal=8192, cccid=5 00:21:05.890 [2024-11-19 11:23:01.211106] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x5fe880) on tqpair(0x59c690): expected_datao=0, payload_size=8192 00:21:05.890 [2024-11-19 11:23:01.211113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211168] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211179] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.890 [2024-11-19 11:23:01.211196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.890 [2024-11-19 11:23:01.211202] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211208] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x59c690): datao=0, datal=512, cccid=4 00:21:05.890 [2024-11-19 11:23:01.211215] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5fe700) on tqpair(0x59c690): expected_datao=0, payload_size=512 00:21:05.890 [2024-11-19 11:23:01.211222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211231] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211237] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.890 [2024-11-19 11:23:01.211254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.890 [2024-11-19 11:23:01.211260] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211266] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x59c690): datao=0, datal=512, cccid=6 00:21:05.890 [2024-11-19 11:23:01.211273] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5fea00) on tqpair(0x59c690): expected_datao=0, 
payload_size=512 00:21:05.890 [2024-11-19 11:23:01.211280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211292] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211300] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:05.890 [2024-11-19 11:23:01.211316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:05.890 [2024-11-19 11:23:01.211322] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.211328] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x59c690): datao=0, datal=4096, cccid=7 00:21:05.890 [2024-11-19 11:23:01.211335] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5feb80) on tqpair(0x59c690): expected_datao=0, payload_size=4096 00:21:05.890 [2024-11-19 11:23:01.211357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.215393] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.215413] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.215426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.890 [2024-11-19 11:23:01.215436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.890 [2024-11-19 11:23:01.215442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.890 [2024-11-19 11:23:01.215449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe880) on tqpair=0x59c690 00:21:05.890 [2024-11-19 11:23:01.215471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.891 [2024-11-19 11:23:01.215482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.891 [2024-11-19 
11:23:01.215490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.891 [2024-11-19 11:23:01.215496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe700) on tqpair=0x59c690 00:21:05.891 [2024-11-19 11:23:01.215512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.891 [2024-11-19 11:23:01.215523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.891 [2024-11-19 11:23:01.215530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.891 [2024-11-19 11:23:01.215536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fea00) on tqpair=0x59c690 00:21:05.891 [2024-11-19 11:23:01.215546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.891 [2024-11-19 11:23:01.215556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.891 [2024-11-19 11:23:01.215562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.891 [2024-11-19 11:23:01.215568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5feb80) on tqpair=0x59c690 00:21:05.891 ===================================================== 00:21:05.891 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.891 ===================================================== 00:21:05.891 Controller Capabilities/Features 00:21:05.891 ================================ 00:21:05.891 Vendor ID: 8086 00:21:05.891 Subsystem Vendor ID: 8086 00:21:05.891 Serial Number: SPDK00000000000001 00:21:05.891 Model Number: SPDK bdev Controller 00:21:05.891 Firmware Version: 25.01 00:21:05.891 Recommended Arb Burst: 6 00:21:05.891 IEEE OUI Identifier: e4 d2 5c 00:21:05.891 Multi-path I/O 00:21:05.891 May have multiple subsystem ports: Yes 00:21:05.891 May have multiple controllers: Yes 00:21:05.891 Associated with SR-IOV VF: No 00:21:05.891 Max Data Transfer Size: 131072 00:21:05.891 Max Number of Namespaces: 32 00:21:05.891 
Max Number of I/O Queues: 127 00:21:05.891 NVMe Specification Version (VS): 1.3 00:21:05.891 NVMe Specification Version (Identify): 1.3 00:21:05.891 Maximum Queue Entries: 128 00:21:05.891 Contiguous Queues Required: Yes 00:21:05.891 Arbitration Mechanisms Supported 00:21:05.891 Weighted Round Robin: Not Supported 00:21:05.891 Vendor Specific: Not Supported 00:21:05.891 Reset Timeout: 15000 ms 00:21:05.891 Doorbell Stride: 4 bytes 00:21:05.891 NVM Subsystem Reset: Not Supported 00:21:05.891 Command Sets Supported 00:21:05.891 NVM Command Set: Supported 00:21:05.891 Boot Partition: Not Supported 00:21:05.891 Memory Page Size Minimum: 4096 bytes 00:21:05.891 Memory Page Size Maximum: 4096 bytes 00:21:05.891 Persistent Memory Region: Not Supported 00:21:05.891 Optional Asynchronous Events Supported 00:21:05.891 Namespace Attribute Notices: Supported 00:21:05.891 Firmware Activation Notices: Not Supported 00:21:05.891 ANA Change Notices: Not Supported 00:21:05.891 PLE Aggregate Log Change Notices: Not Supported 00:21:05.891 LBA Status Info Alert Notices: Not Supported 00:21:05.891 EGE Aggregate Log Change Notices: Not Supported 00:21:05.891 Normal NVM Subsystem Shutdown event: Not Supported 00:21:05.891 Zone Descriptor Change Notices: Not Supported 00:21:05.891 Discovery Log Change Notices: Not Supported 00:21:05.891 Controller Attributes 00:21:05.891 128-bit Host Identifier: Supported 00:21:05.891 Non-Operational Permissive Mode: Not Supported 00:21:05.891 NVM Sets: Not Supported 00:21:05.891 Read Recovery Levels: Not Supported 00:21:05.891 Endurance Groups: Not Supported 00:21:05.891 Predictable Latency Mode: Not Supported 00:21:05.891 Traffic Based Keep ALive: Not Supported 00:21:05.891 Namespace Granularity: Not Supported 00:21:05.891 SQ Associations: Not Supported 00:21:05.891 UUID List: Not Supported 00:21:05.891 Multi-Domain Subsystem: Not Supported 00:21:05.891 Fixed Capacity Management: Not Supported 00:21:05.891 Variable Capacity Management: Not Supported 
00:21:05.891 Delete Endurance Group: Not Supported 00:21:05.891 Delete NVM Set: Not Supported 00:21:05.891 Extended LBA Formats Supported: Not Supported 00:21:05.891 Flexible Data Placement Supported: Not Supported 00:21:05.891 00:21:05.891 Controller Memory Buffer Support 00:21:05.891 ================================ 00:21:05.891 Supported: No 00:21:05.891 00:21:05.891 Persistent Memory Region Support 00:21:05.891 ================================ 00:21:05.891 Supported: No 00:21:05.891 00:21:05.891 Admin Command Set Attributes 00:21:05.891 ============================ 00:21:05.891 Security Send/Receive: Not Supported 00:21:05.891 Format NVM: Not Supported 00:21:05.891 Firmware Activate/Download: Not Supported 00:21:05.891 Namespace Management: Not Supported 00:21:05.891 Device Self-Test: Not Supported 00:21:05.891 Directives: Not Supported 00:21:05.891 NVMe-MI: Not Supported 00:21:05.891 Virtualization Management: Not Supported 00:21:05.891 Doorbell Buffer Config: Not Supported 00:21:05.891 Get LBA Status Capability: Not Supported 00:21:05.891 Command & Feature Lockdown Capability: Not Supported 00:21:05.891 Abort Command Limit: 4 00:21:05.891 Async Event Request Limit: 4 00:21:05.891 Number of Firmware Slots: N/A 00:21:05.891 Firmware Slot 1 Read-Only: N/A 00:21:05.891 Firmware Activation Without Reset: N/A 00:21:05.891 Multiple Update Detection Support: N/A 00:21:05.891 Firmware Update Granularity: No Information Provided 00:21:05.891 Per-Namespace SMART Log: No 00:21:05.891 Asymmetric Namespace Access Log Page: Not Supported 00:21:05.891 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:05.891 Command Effects Log Page: Supported 00:21:05.891 Get Log Page Extended Data: Supported 00:21:05.891 Telemetry Log Pages: Not Supported 00:21:05.891 Persistent Event Log Pages: Not Supported 00:21:05.891 Supported Log Pages Log Page: May Support 00:21:05.891 Commands Supported & Effects Log Page: Not Supported 00:21:05.891 Feature Identifiers & Effects Log Page:May Support 
00:21:05.891 NVMe-MI Commands & Effects Log Page: May Support 00:21:05.891 Data Area 4 for Telemetry Log: Not Supported 00:21:05.891 Error Log Page Entries Supported: 128 00:21:05.891 Keep Alive: Supported 00:21:05.891 Keep Alive Granularity: 10000 ms 00:21:05.891 00:21:05.891 NVM Command Set Attributes 00:21:05.891 ========================== 00:21:05.891 Submission Queue Entry Size 00:21:05.891 Max: 64 00:21:05.891 Min: 64 00:21:05.891 Completion Queue Entry Size 00:21:05.891 Max: 16 00:21:05.891 Min: 16 00:21:05.891 Number of Namespaces: 32 00:21:05.891 Compare Command: Supported 00:21:05.891 Write Uncorrectable Command: Not Supported 00:21:05.891 Dataset Management Command: Supported 00:21:05.891 Write Zeroes Command: Supported 00:21:05.891 Set Features Save Field: Not Supported 00:21:05.891 Reservations: Supported 00:21:05.891 Timestamp: Not Supported 00:21:05.891 Copy: Supported 00:21:05.891 Volatile Write Cache: Present 00:21:05.891 Atomic Write Unit (Normal): 1 00:21:05.891 Atomic Write Unit (PFail): 1 00:21:05.891 Atomic Compare & Write Unit: 1 00:21:05.891 Fused Compare & Write: Supported 00:21:05.891 Scatter-Gather List 00:21:05.891 SGL Command Set: Supported 00:21:05.891 SGL Keyed: Supported 00:21:05.891 SGL Bit Bucket Descriptor: Not Supported 00:21:05.891 SGL Metadata Pointer: Not Supported 00:21:05.891 Oversized SGL: Not Supported 00:21:05.891 SGL Metadata Address: Not Supported 00:21:05.891 SGL Offset: Supported 00:21:05.891 Transport SGL Data Block: Not Supported 00:21:05.891 Replay Protected Memory Block: Not Supported 00:21:05.891 00:21:05.891 Firmware Slot Information 00:21:05.891 ========================= 00:21:05.891 Active slot: 1 00:21:05.891 Slot 1 Firmware Revision: 25.01 00:21:05.891 00:21:05.891 00:21:05.891 Commands Supported and Effects 00:21:05.891 ============================== 00:21:05.891 Admin Commands 00:21:05.891 -------------- 00:21:05.891 Get Log Page (02h): Supported 00:21:05.891 Identify (06h): Supported 00:21:05.891 Abort 
(08h): Supported 00:21:05.891 Set Features (09h): Supported 00:21:05.891 Get Features (0Ah): Supported 00:21:05.891 Asynchronous Event Request (0Ch): Supported 00:21:05.891 Keep Alive (18h): Supported 00:21:05.891 I/O Commands 00:21:05.891 ------------ 00:21:05.891 Flush (00h): Supported LBA-Change 00:21:05.891 Write (01h): Supported LBA-Change 00:21:05.891 Read (02h): Supported 00:21:05.891 Compare (05h): Supported 00:21:05.891 Write Zeroes (08h): Supported LBA-Change 00:21:05.891 Dataset Management (09h): Supported LBA-Change 00:21:05.891 Copy (19h): Supported LBA-Change 00:21:05.891 00:21:05.891 Error Log 00:21:05.891 ========= 00:21:05.891 00:21:05.891 Arbitration 00:21:05.892 =========== 00:21:05.892 Arbitration Burst: 1 00:21:05.892 00:21:05.892 Power Management 00:21:05.892 ================ 00:21:05.892 Number of Power States: 1 00:21:05.892 Current Power State: Power State #0 00:21:05.892 Power State #0: 00:21:05.892 Max Power: 0.00 W 00:21:05.892 Non-Operational State: Operational 00:21:05.892 Entry Latency: Not Reported 00:21:05.892 Exit Latency: Not Reported 00:21:05.892 Relative Read Throughput: 0 00:21:05.892 Relative Read Latency: 0 00:21:05.892 Relative Write Throughput: 0 00:21:05.892 Relative Write Latency: 0 00:21:05.892 Idle Power: Not Reported 00:21:05.892 Active Power: Not Reported 00:21:05.892 Non-Operational Permissive Mode: Not Supported 00:21:05.892 00:21:05.892 Health Information 00:21:05.892 ================== 00:21:05.892 Critical Warnings: 00:21:05.892 Available Spare Space: OK 00:21:05.892 Temperature: OK 00:21:05.892 Device Reliability: OK 00:21:05.892 Read Only: No 00:21:05.892 Volatile Memory Backup: OK 00:21:05.892 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:05.892 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:05.892 Available Spare: 0% 00:21:05.892 Available Spare Threshold: 0% 00:21:05.892 Life Percentage Used:[2024-11-19 11:23:01.215705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.892 
[2024-11-19 11:23:01.215716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x59c690) 00:21:05.892 [2024-11-19 11:23:01.215727] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.892 [2024-11-19 11:23:01.215751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5feb80, cid 7, qid 0 00:21:05.892 [2024-11-19 11:23:01.215918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.892 [2024-11-19 11:23:01.215932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.892 [2024-11-19 11:23:01.215938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.215944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5feb80) on tqpair=0x59c690 00:21:05.892 [2024-11-19 11:23:01.215987] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:05.892 [2024-11-19 11:23:01.216007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe100) on tqpair=0x59c690 00:21:05.892 [2024-11-19 11:23:01.216018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.892 [2024-11-19 11:23:01.216026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe280) on tqpair=0x59c690 00:21:05.892 [2024-11-19 11:23:01.216046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.892 [2024-11-19 11:23:01.216055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe400) on tqpair=0x59c690 00:21:05.892 [2024-11-19 11:23:01.216062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.892 
[2024-11-19 11:23:01.216070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.892 [2024-11-19 11:23:01.216077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.892 [2024-11-19 11:23:01.216089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.216097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.216103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.892 [2024-11-19 11:23:01.216113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.892 [2024-11-19 11:23:01.216136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.892 [2024-11-19 11:23:01.216321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.892 [2024-11-19 11:23:01.216333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.892 [2024-11-19 11:23:01.216340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.216370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.892 [2024-11-19 11:23:01.216383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.216391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.216397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.892 [2024-11-19 11:23:01.216407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.892 [2024-11-19 11:23:01.216435] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.892 [2024-11-19 11:23:01.216542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.892 [2024-11-19 11:23:01.216556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.892 [2024-11-19 11:23:01.216563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.216570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.892 [2024-11-19 11:23:01.216577] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:05.892 [2024-11-19 11:23:01.216585] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:05.892 [2024-11-19 11:23:01.216602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.216612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.216619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.892 [2024-11-19 11:23:01.216630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.892 [2024-11-19 11:23:01.216651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.892 [2024-11-19 11:23:01.216750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.892 [2024-11-19 11:23:01.216762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.892 [2024-11-19 11:23:01.216769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.216775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.892 [2024-11-19 11:23:01.216792] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.216804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.216811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.892 [2024-11-19 11:23:01.216821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.892 [2024-11-19 11:23:01.216842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.892 [2024-11-19 11:23:01.216952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.892 [2024-11-19 11:23:01.216966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.892 [2024-11-19 11:23:01.216973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.216980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.892 [2024-11-19 11:23:01.216996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.217005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.217011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.892 [2024-11-19 11:23:01.217021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.892 [2024-11-19 11:23:01.217041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.892 [2024-11-19 11:23:01.217141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.892 [2024-11-19 11:23:01.217154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.892 [2024-11-19 11:23:01.217161] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.217167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.892 [2024-11-19 11:23:01.217183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.217192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.892 [2024-11-19 11:23:01.217198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.893 [2024-11-19 11:23:01.217208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.893 [2024-11-19 11:23:01.217229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.893 [2024-11-19 11:23:01.217307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.893 [2024-11-19 11:23:01.217320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.893 [2024-11-19 11:23:01.217327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.217333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.893 [2024-11-19 11:23:01.217348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.217358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.217389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.893 [2024-11-19 11:23:01.217401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.893 [2024-11-19 11:23:01.217424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.893 [2024-11-19 
11:23:01.217555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.893 [2024-11-19 11:23:01.217567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.893 [2024-11-19 11:23:01.217574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.217581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.893 [2024-11-19 11:23:01.217596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.217605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.217618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.893 [2024-11-19 11:23:01.217630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.893 [2024-11-19 11:23:01.217651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.893 [2024-11-19 11:23:01.217780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.893 [2024-11-19 11:23:01.217794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.893 [2024-11-19 11:23:01.217801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.217808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.893 [2024-11-19 11:23:01.217825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.217835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.217841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.893 [2024-11-19 11:23:01.217851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.893 [2024-11-19 11:23:01.217872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.893 [2024-11-19 11:23:01.217981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.893 [2024-11-19 11:23:01.217995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.893 [2024-11-19 11:23:01.218002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.893 [2024-11-19 11:23:01.218024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.893 [2024-11-19 11:23:01.218049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.893 [2024-11-19 11:23:01.218069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.893 [2024-11-19 11:23:01.218146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.893 [2024-11-19 11:23:01.218159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.893 [2024-11-19 11:23:01.218166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.893 [2024-11-19 11:23:01.218188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.893 
[2024-11-19 11:23:01.218203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.893 [2024-11-19 11:23:01.218213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.893 [2024-11-19 11:23:01.218233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.893 [2024-11-19 11:23:01.218333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.893 [2024-11-19 11:23:01.218359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.893 [2024-11-19 11:23:01.218377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.893 [2024-11-19 11:23:01.218401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218410] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.893 [2024-11-19 11:23:01.218430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.893 [2024-11-19 11:23:01.218453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.893 [2024-11-19 11:23:01.218581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.893 [2024-11-19 11:23:01.218593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.893 [2024-11-19 11:23:01.218600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 
00:21:05.893 [2024-11-19 11:23:01.218622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.893 [2024-11-19 11:23:01.218662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.893 [2024-11-19 11:23:01.218683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.893 [2024-11-19 11:23:01.218812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.893 [2024-11-19 11:23:01.218826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.893 [2024-11-19 11:23:01.218833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.893 [2024-11-19 11:23:01.218855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.218871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.893 [2024-11-19 11:23:01.218881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.893 [2024-11-19 11:23:01.218902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.893 [2024-11-19 11:23:01.218979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.893 [2024-11-19 11:23:01.218992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.893 
[2024-11-19 11:23:01.219000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.219006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.893 [2024-11-19 11:23:01.219022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.219031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.219037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.893 [2024-11-19 11:23:01.219047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.893 [2024-11-19 11:23:01.219068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.893 [2024-11-19 11:23:01.219164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.893 [2024-11-19 11:23:01.219178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.893 [2024-11-19 11:23:01.219184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.219191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.893 [2024-11-19 11:23:01.219206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.219215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.219221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.893 [2024-11-19 11:23:01.219231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.893 [2024-11-19 11:23:01.219255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 
00:21:05.893 [2024-11-19 11:23:01.219332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.893 [2024-11-19 11:23:01.219344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.893 [2024-11-19 11:23:01.219350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.219357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.893 [2024-11-19 11:23:01.223414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.223427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:05.893 [2024-11-19 11:23:01.223433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x59c690) 00:21:05.893 [2024-11-19 11:23:01.223444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.893 [2024-11-19 11:23:01.223466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5fe580, cid 3, qid 0 00:21:05.893 [2024-11-19 11:23:01.223603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:05.893 [2024-11-19 11:23:01.223616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:05.894 [2024-11-19 11:23:01.223623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:05.894 [2024-11-19 11:23:01.223629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5fe580) on tqpair=0x59c690 00:21:05.894 [2024-11-19 11:23:01.223642] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:21:05.894 0% 00:21:05.894 Data Units Read: 0 00:21:05.894 Data Units Written: 0 00:21:05.894 Host Read Commands: 0 00:21:05.894 Host Write Commands: 0 00:21:05.894 Controller Busy Time: 0 minutes 00:21:05.894 Power Cycles: 0 00:21:05.894 Power On Hours: 0 
hours 00:21:05.894 Unsafe Shutdowns: 0 00:21:05.894 Unrecoverable Media Errors: 0 00:21:05.894 Lifetime Error Log Entries: 0 00:21:05.894 Warning Temperature Time: 0 minutes 00:21:05.894 Critical Temperature Time: 0 minutes 00:21:05.894 00:21:05.894 Number of Queues 00:21:05.894 ================ 00:21:05.894 Number of I/O Submission Queues: 127 00:21:05.894 Number of I/O Completion Queues: 127 00:21:05.894 00:21:05.894 Active Namespaces 00:21:05.894 ================= 00:21:05.894 Namespace ID:1 00:21:05.894 Error Recovery Timeout: Unlimited 00:21:05.894 Command Set Identifier: NVM (00h) 00:21:05.894 Deallocate: Supported 00:21:05.894 Deallocated/Unwritten Error: Not Supported 00:21:05.894 Deallocated Read Value: Unknown 00:21:05.894 Deallocate in Write Zeroes: Not Supported 00:21:05.894 Deallocated Guard Field: 0xFFFF 00:21:05.894 Flush: Supported 00:21:05.894 Reservation: Supported 00:21:05.894 Namespace Sharing Capabilities: Multiple Controllers 00:21:05.894 Size (in LBAs): 131072 (0GiB) 00:21:05.894 Capacity (in LBAs): 131072 (0GiB) 00:21:05.894 Utilization (in LBAs): 131072 (0GiB) 00:21:05.894 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:05.894 EUI64: ABCDEF0123456789 00:21:05.894 UUID: 59b55299-7c1b-4a75-ba6a-6dd4b328f5e3 00:21:05.894 Thin Provisioning: Not Supported 00:21:05.894 Per-NS Atomic Units: Yes 00:21:05.894 Atomic Boundary Size (Normal): 0 00:21:05.894 Atomic Boundary Size (PFail): 0 00:21:05.894 Atomic Boundary Offset: 0 00:21:05.894 Maximum Single Source Range Length: 65535 00:21:05.894 Maximum Copy Length: 65535 00:21:05.894 Maximum Source Range Count: 1 00:21:05.894 NGUID/EUI64 Never Reused: No 00:21:05.894 Namespace Write Protected: No 00:21:05.894 Number of LBA Formats: 1 00:21:05.894 Current LBA Format: LBA Format #00 00:21:05.894 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:05.894 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:05.894 rmmod nvme_tcp 00:21:05.894 rmmod nvme_fabrics 00:21:05.894 rmmod nvme_keyring 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2669836 ']' 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2669836 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2669836 ']' 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2669836 00:21:05.894 
11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2669836 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2669836' 00:21:05.894 killing process with pid 2669836 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2669836 00:21:05.894 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2669836 00:21:06.152 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:06.152 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:06.152 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:06.152 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:06.152 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:06.152 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:06.152 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:06.152 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:06.152 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:06.152 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:06.152 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.152 11:23:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:08.690 00:21:08.690 real 0m6.206s 00:21:08.690 user 0m5.294s 00:21:08.690 sys 0m2.346s 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.690 ************************************ 00:21:08.690 END TEST nvmf_identify 00:21:08.690 ************************************ 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.690 ************************************ 00:21:08.690 START TEST nvmf_perf 00:21:08.690 ************************************ 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:08.690 * Looking for test storage... 
00:21:08.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.690 --rc genhtml_branch_coverage=1 00:21:08.690 --rc genhtml_function_coverage=1 00:21:08.690 --rc genhtml_legend=1 00:21:08.690 --rc geninfo_all_blocks=1 00:21:08.690 --rc geninfo_unexecuted_blocks=1 00:21:08.690 00:21:08.690 ' 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:21:08.690 --rc genhtml_branch_coverage=1 00:21:08.690 --rc genhtml_function_coverage=1 00:21:08.690 --rc genhtml_legend=1 00:21:08.690 --rc geninfo_all_blocks=1 00:21:08.690 --rc geninfo_unexecuted_blocks=1 00:21:08.690 00:21:08.690 ' 00:21:08.690 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.690 --rc genhtml_branch_coverage=1 00:21:08.690 --rc genhtml_function_coverage=1 00:21:08.690 --rc genhtml_legend=1 00:21:08.691 --rc geninfo_all_blocks=1 00:21:08.691 --rc geninfo_unexecuted_blocks=1 00:21:08.691 00:21:08.691 ' 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:08.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.691 --rc genhtml_branch_coverage=1 00:21:08.691 --rc genhtml_function_coverage=1 00:21:08.691 --rc genhtml_legend=1 00:21:08.691 --rc geninfo_all_blocks=1 00:21:08.691 --rc geninfo_unexecuted_blocks=1 00:21:08.691 00:21:08.691 ' 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:08.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:08.691 11:23:03 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:08.691 11:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:11.286 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.286 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.286 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.286 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.286 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.286 11:23:06 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.286 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.286 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.286 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.287 
11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:11.287 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:11.287 Found 0000:82:00.1 (0x8086 - 
0x159b) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:11.287 Found net devices under 0000:82:00.0: cvl_0_0 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.287 11:23:06 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:11.287 Found net devices under 0000:82:00.1: cvl_0_1 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:11.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:21:11.287 00:21:11.287 --- 10.0.0.2 ping statistics --- 00:21:11.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.287 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:21:11.287 00:21:11.287 --- 10.0.0.1 ping statistics --- 00:21:11.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.287 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2672225 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2672225 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2672225 ']' 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.287 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:11.287 [2024-11-19 11:23:06.735429] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:21:11.287 [2024-11-19 11:23:06.735502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.546 [2024-11-19 11:23:06.820536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.546 [2024-11-19 11:23:06.878095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.546 [2024-11-19 11:23:06.878159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.547 [2024-11-19 11:23:06.878195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.547 [2024-11-19 11:23:06.878213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.547 [2024-11-19 11:23:06.878228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:11.547 [2024-11-19 11:23:06.883383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.547 [2024-11-19 11:23:06.883410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.547 [2024-11-19 11:23:06.883463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.547 [2024-11-19 11:23:06.883466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.547 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.547 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:11.547 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.547 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.547 11:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:11.547 11:23:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.547 11:23:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:11.547 11:23:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:14.825 11:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:14.825 11:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:15.082 11:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:81:00.0 00:21:15.083 11:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:15.341 11:23:10 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:15.341 11:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:81:00.0 ']' 00:21:15.341 11:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:15.341 11:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:15.341 11:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:15.601 [2024-11-19 11:23:10.983878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.601 11:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:15.858 11:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:15.858 11:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:16.116 11:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:16.116 11:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:16.373 11:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.631 [2024-11-19 11:23:12.079926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.631 11:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:21:16.889 11:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:81:00.0 ']' 00:21:16.889 11:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:81:00.0' 00:21:16.889 11:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:16.889 11:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:81:00.0' 00:21:18.263 Initializing NVMe Controllers 00:21:18.263 Attached to NVMe Controller at 0000:81:00.0 [8086:0a54] 00:21:18.263 Associating PCIE (0000:81:00.0) NSID 1 with lcore 0 00:21:18.263 Initialization complete. Launching workers. 00:21:18.263 ======================================================== 00:21:18.263 Latency(us) 00:21:18.263 Device Information : IOPS MiB/s Average min max 00:21:18.263 PCIE (0000:81:00.0) NSID 1 from core 0: 85185.27 332.75 375.01 28.75 5649.88 00:21:18.263 ======================================================== 00:21:18.263 Total : 85185.27 332.75 375.01 28.75 5649.88 00:21:18.263 00:21:18.263 11:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.636 Initializing NVMe Controllers 00:21:19.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:19.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:19.636 Initialization complete. Launching workers. 
00:21:19.636 ======================================================== 00:21:19.636 Latency(us) 00:21:19.636 Device Information : IOPS MiB/s Average min max 00:21:19.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.00 0.32 12393.46 133.15 45743.65 00:21:19.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 57.00 0.22 17766.66 7190.30 47903.50 00:21:19.636 ======================================================== 00:21:19.636 Total : 140.00 0.55 14581.12 133.15 47903.50 00:21:19.636 00:21:19.636 11:23:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:21.009 Initializing NVMe Controllers 00:21:21.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:21.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:21.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:21.009 Initialization complete. Launching workers. 
00:21:21.009 ======================================================== 00:21:21.009 Latency(us) 00:21:21.009 Device Information : IOPS MiB/s Average min max 00:21:21.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8551.81 33.41 3741.54 654.38 10304.31 00:21:21.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3801.58 14.85 8415.19 6305.54 18602.25 00:21:21.009 ======================================================== 00:21:21.009 Total : 12353.39 48.26 5179.79 654.38 18602.25 00:21:21.009 00:21:21.009 11:23:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:21.009 11:23:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:21.009 11:23:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:23.539 Initializing NVMe Controllers 00:21:23.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.539 Controller IO queue size 128, less than required. 00:21:23.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.539 Controller IO queue size 128, less than required. 00:21:23.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:23.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:23.539 Initialization complete. Launching workers. 
00:21:23.539 ======================================================== 00:21:23.539 Latency(us) 00:21:23.539 Device Information : IOPS MiB/s Average min max 00:21:23.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1354.99 338.75 97089.18 67172.95 156009.58 00:21:23.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 561.50 140.37 237459.16 122163.84 371303.04 00:21:23.539 ======================================================== 00:21:23.539 Total : 1916.48 479.12 138215.06 67172.95 371303.04 00:21:23.539 00:21:23.539 11:23:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:23.539 No valid NVMe controllers or AIO or URING devices found 00:21:23.539 Initializing NVMe Controllers 00:21:23.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.539 Controller IO queue size 128, less than required. 00:21:23.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.539 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:23.539 Controller IO queue size 128, less than required. 00:21:23.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.539 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:21:23.539 WARNING: Some requested NVMe devices were skipped 00:21:23.539 11:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:26.068 Initializing NVMe Controllers 00:21:26.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:26.068 Controller IO queue size 128, less than required. 00:21:26.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:26.068 Controller IO queue size 128, less than required. 00:21:26.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:26.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:26.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:26.068 Initialization complete. Launching workers. 
00:21:26.068 00:21:26.068 ==================== 00:21:26.068 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:26.068 TCP transport: 00:21:26.068 polls: 7810 00:21:26.068 idle_polls: 5390 00:21:26.068 sock_completions: 2420 00:21:26.068 nvme_completions: 4855 00:21:26.068 submitted_requests: 7292 00:21:26.068 queued_requests: 1 00:21:26.068 00:21:26.068 ==================== 00:21:26.068 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:26.068 TCP transport: 00:21:26.068 polls: 8055 00:21:26.068 idle_polls: 5594 00:21:26.068 sock_completions: 2461 00:21:26.068 nvme_completions: 4895 00:21:26.068 submitted_requests: 7328 00:21:26.068 queued_requests: 1 00:21:26.068 ======================================================== 00:21:26.068 Latency(us) 00:21:26.068 Device Information : IOPS MiB/s Average min max 00:21:26.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1211.59 302.90 109037.74 70024.37 171073.00 00:21:26.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1221.58 305.39 106028.51 55178.39 158927.16 00:21:26.068 ======================================================== 00:21:26.068 Total : 2433.17 608.29 107526.95 55178.39 171073.00 00:21:26.068 00:21:26.068 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:26.068 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:26.634 rmmod nvme_tcp 00:21:26.634 rmmod nvme_fabrics 00:21:26.634 rmmod nvme_keyring 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2672225 ']' 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2672225 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2672225 ']' 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2672225 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2672225 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2672225' 00:21:26.634 killing process with pid 2672225 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2672225 00:21:26.634 11:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2672225 00:21:29.161 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:29.161 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:29.161 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:29.161 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:29.161 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:29.161 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:29.161 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:29.161 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:29.161 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:29.161 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.161 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.161 11:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.068 11:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:31.068 00:21:31.068 real 0m22.854s 00:21:31.068 user 1m9.338s 00:21:31.068 sys 0m6.315s 00:21:31.068 11:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.068 11:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:31.068 ************************************ 00:21:31.068 END TEST nvmf_perf 00:21:31.068 ************************************ 00:21:31.068 11:23:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:31.068 11:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:31.068 11:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.068 11:23:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.327 ************************************ 00:21:31.327 START TEST nvmf_fio_host 00:21:31.327 ************************************ 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:31.327 * Looking for test storage... 00:21:31.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.327 11:23:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.327 11:23:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:31.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.327 --rc genhtml_branch_coverage=1 00:21:31.327 --rc genhtml_function_coverage=1 00:21:31.327 --rc genhtml_legend=1 00:21:31.327 --rc geninfo_all_blocks=1 00:21:31.327 --rc geninfo_unexecuted_blocks=1 00:21:31.327 00:21:31.327 ' 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:31.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.327 --rc genhtml_branch_coverage=1 00:21:31.327 --rc genhtml_function_coverage=1 00:21:31.327 --rc genhtml_legend=1 00:21:31.327 --rc geninfo_all_blocks=1 00:21:31.327 --rc geninfo_unexecuted_blocks=1 00:21:31.327 00:21:31.327 ' 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:31.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.327 --rc genhtml_branch_coverage=1 00:21:31.327 --rc genhtml_function_coverage=1 00:21:31.327 --rc genhtml_legend=1 00:21:31.327 --rc geninfo_all_blocks=1 00:21:31.327 --rc geninfo_unexecuted_blocks=1 00:21:31.327 00:21:31.327 ' 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:31.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.327 --rc genhtml_branch_coverage=1 00:21:31.327 --rc genhtml_function_coverage=1 00:21:31.327 --rc genhtml_legend=1 00:21:31.327 --rc geninfo_all_blocks=1 00:21:31.327 --rc geninfo_unexecuted_blocks=1 00:21:31.327 00:21:31.327 ' 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.327 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:31.328 11:23:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:31.328 11:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.861 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:82:00.0 (0x8086 - 0x159b)' 00:21:33.862 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:33.862 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.862 11:23:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:33.862 Found net devices under 0000:82:00.0: cvl_0_0 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:33.862 Found net devices under 0000:82:00.1: cvl_0_1 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:33.862 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.862 11:23:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:34.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:21:34.121 00:21:34.121 --- 10.0.0.2 ping statistics --- 00:21:34.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.121 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:34.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:21:34.121 00:21:34.121 --- 10.0.0.1 ping statistics --- 00:21:34.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.121 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2676609 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2676609 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2676609 ']' 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.121 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.121 [2024-11-19 11:23:29.532008] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:21:34.121 [2024-11-19 11:23:29.532092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.380 [2024-11-19 11:23:29.617769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:34.380 [2024-11-19 11:23:29.677954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.380 [2024-11-19 11:23:29.678018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:34.380 [2024-11-19 11:23:29.678032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.380 [2024-11-19 11:23:29.678043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.380 [2024-11-19 11:23:29.678053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.380 [2024-11-19 11:23:29.679905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.380 [2024-11-19 11:23:29.679972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.380 [2024-11-19 11:23:29.680037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.380 [2024-11-19 11:23:29.680040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.380 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.380 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:34.380 11:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:34.639 [2024-11-19 11:23:30.105299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.639 11:23:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:34.639 11:23:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.639 11:23:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.897 11:23:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:35.154 Malloc1 00:21:35.154 11:23:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:35.412 11:23:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:35.670 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.927 [2024-11-19 11:23:31.245653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.928 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:36.185 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:36.185 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:36.185 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:36.185 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:36.185 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:36.185 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:36.185 11:23:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:36.185 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:36.186 11:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:36.443 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:36.443 fio-3.35 00:21:36.443 Starting 1 thread 00:21:38.971 00:21:38.971 test: (groupid=0, jobs=1): err= 0: pid=2676979: Tue Nov 19 11:23:34 2024 00:21:38.971 read: IOPS=8821, BW=34.5MiB/s (36.1MB/s)(69.2MiB/2007msec) 00:21:38.971 slat (usec): min=2, max=120, avg= 2.76, stdev= 2.04 00:21:38.971 clat (usec): min=2466, max=13043, avg=7926.11, stdev=647.54 00:21:38.971 lat (usec): min=2488, max=13046, avg=7928.86, stdev=647.44 00:21:38.971 clat percentiles (usec): 00:21:38.971 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:21:38.971 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8094], 00:21:38.971 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:21:38.971 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[10945], 99.95th=[12780], 00:21:38.971 | 99.99th=[13042] 00:21:38.971 bw ( KiB/s): min=34363, max=35792, per=99.95%, avg=35268.75, stdev=626.25, samples=4 00:21:38.971 iops : min= 8590, max= 8948, avg=8817.00, stdev=156.92, samples=4 00:21:38.972 write: IOPS=8833, BW=34.5MiB/s (36.2MB/s)(69.3MiB/2007msec); 0 zone resets 00:21:38.972 slat (usec): min=2, max=106, avg= 2.87, stdev= 1.76 00:21:38.972 clat (usec): min=1042, max=13023, avg=6519.54, stdev=556.38 00:21:38.972 lat (usec): min=1048, max=13026, avg=6522.41, stdev=556.30 00:21:38.972 clat percentiles (usec): 00:21:38.972 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:21:38.972 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:21:38.972 | 70.00th=[ 
6783], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:21:38.972 | 99.00th=[ 7635], 99.50th=[ 7898], 99.90th=[11076], 99.95th=[12649], 00:21:38.972 | 99.99th=[13042] 00:21:38.972 bw ( KiB/s): min=35064, max=35592, per=99.93%, avg=35308.25, stdev=243.36, samples=4 00:21:38.972 iops : min= 8766, max= 8898, avg=8827.00, stdev=60.89, samples=4 00:21:38.972 lat (msec) : 2=0.03%, 4=0.12%, 10=99.70%, 20=0.16% 00:21:38.972 cpu : usr=70.89%, sys=27.87%, ctx=54, majf=0, minf=28 00:21:38.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:38.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:38.972 issued rwts: total=17704,17729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:38.972 00:21:38.972 Run status group 0 (all jobs): 00:21:38.972 READ: bw=34.5MiB/s (36.1MB/s), 34.5MiB/s-34.5MiB/s (36.1MB/s-36.1MB/s), io=69.2MiB (72.5MB), run=2007-2007msec 00:21:38.972 WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.6MB), run=2007-2007msec 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:38.972 
11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:38.972 11:23:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:38.972 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:38.972 fio-3.35 00:21:38.972 Starting 1 thread 00:21:41.501 00:21:41.501 test: (groupid=0, jobs=1): err= 0: pid=2677418: Tue Nov 19 11:23:36 2024 00:21:41.501 read: IOPS=8167, BW=128MiB/s (134MB/s)(256MiB/2006msec) 00:21:41.501 slat (usec): min=2, max=133, avg= 4.21, stdev= 2.43 00:21:41.501 clat (usec): min=2068, max=18204, avg=8979.54, stdev=2093.18 00:21:41.501 lat (usec): min=2072, max=18207, avg=8983.74, stdev=2093.29 00:21:41.501 clat percentiles (usec): 00:21:41.501 | 1.00th=[ 4948], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 7111], 00:21:41.501 | 30.00th=[ 7635], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:21:41.501 | 70.00th=[ 9896], 80.00th=[10814], 90.00th=[11994], 95.00th=[12518], 00:21:41.501 | 99.00th=[13698], 99.50th=[14222], 99.90th=[16450], 99.95th=[16909], 00:21:41.501 | 99.99th=[18220] 00:21:41.501 bw ( KiB/s): min=58752, max=76608, per=51.15%, avg=66840.00, stdev=9013.91, samples=4 00:21:41.501 iops : min= 3672, max= 4788, avg=4177.50, stdev=563.37, samples=4 00:21:41.501 write: IOPS=4911, BW=76.7MiB/s (80.5MB/s)(137MiB/1787msec); 0 zone resets 00:21:41.501 slat (usec): min=30, max=192, avg=37.79, stdev= 6.21 00:21:41.501 clat (usec): min=4556, max=17736, avg=11706.08, stdev=1969.21 00:21:41.501 lat (usec): min=4604, max=17786, avg=11743.88, stdev=1969.02 00:21:41.501 clat percentiles (usec): 00:21:41.501 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10028], 
00:21:41.501 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[11994], 00:21:41.501 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14615], 95.00th=[15401], 00:21:41.501 | 99.00th=[16712], 99.50th=[17171], 99.90th=[17433], 99.95th=[17695], 00:21:41.501 | 99.99th=[17695] 00:21:41.501 bw ( KiB/s): min=60032, max=79840, per=88.81%, avg=69784.00, stdev=9517.85, samples=4 00:21:41.501 iops : min= 3752, max= 4990, avg=4361.50, stdev=594.87, samples=4 00:21:41.501 lat (msec) : 4=0.14%, 10=52.97%, 20=46.90% 00:21:41.501 cpu : usr=80.70%, sys=17.76%, ctx=57, majf=0, minf=61 00:21:41.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:41.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:41.501 issued rwts: total=16384,8776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:41.501 00:21:41.501 Run status group 0 (all jobs): 00:21:41.501 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=256MiB (268MB), run=2006-2006msec 00:21:41.501 WRITE: bw=76.7MiB/s (80.5MB/s), 76.7MiB/s-76.7MiB/s (80.5MB/s-80.5MB/s), io=137MiB (144MB), run=1787-1787msec 00:21:41.501 11:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:41.501 11:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:41.501 11:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:41.501 11:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:41.501 11:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:41.501 11:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:41.501 11:23:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:41.501 11:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.501 11:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:41.501 11:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.501 11:23:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.501 rmmod nvme_tcp 00:21:41.501 rmmod nvme_fabrics 00:21:41.759 rmmod nvme_keyring 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2676609 ']' 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2676609 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2676609 ']' 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2676609 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2676609 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2676609' 00:21:41.759 killing 
process with pid 2676609 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2676609 00:21:41.759 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2676609 00:21:42.017 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:42.017 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:42.017 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:42.017 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:42.017 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:21:42.017 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:42.017 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:42.017 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:42.017 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:42.017 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.017 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.017 11:23:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.921 11:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.921 00:21:43.921 real 0m12.799s 00:21:43.921 user 0m36.797s 00:21:43.921 sys 0m4.189s 00:21:43.921 11:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.921 11:23:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.921 ************************************ 00:21:43.921 END TEST 
nvmf_fio_host 00:21:43.921 ************************************ 00:21:43.921 11:23:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:43.921 11:23:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:43.921 11:23:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.921 11:23:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.179 ************************************ 00:21:44.179 START TEST nvmf_failover 00:21:44.179 ************************************ 00:21:44.179 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:44.179 * Looking for test storage... 00:21:44.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:44.179 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:44.179 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:21:44.179 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:44.179 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:44.179 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.179 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.179 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:44.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.180 --rc genhtml_branch_coverage=1 00:21:44.180 --rc genhtml_function_coverage=1 00:21:44.180 --rc genhtml_legend=1 00:21:44.180 --rc geninfo_all_blocks=1 00:21:44.180 --rc geninfo_unexecuted_blocks=1 00:21:44.180 00:21:44.180 ' 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:44.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.180 --rc genhtml_branch_coverage=1 00:21:44.180 --rc genhtml_function_coverage=1 00:21:44.180 --rc genhtml_legend=1 00:21:44.180 --rc geninfo_all_blocks=1 00:21:44.180 --rc geninfo_unexecuted_blocks=1 00:21:44.180 00:21:44.180 ' 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:44.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.180 --rc genhtml_branch_coverage=1 00:21:44.180 --rc genhtml_function_coverage=1 00:21:44.180 --rc genhtml_legend=1 00:21:44.180 --rc geninfo_all_blocks=1 00:21:44.180 --rc geninfo_unexecuted_blocks=1 00:21:44.180 00:21:44.180 ' 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:44.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.180 --rc genhtml_branch_coverage=1 00:21:44.180 --rc genhtml_function_coverage=1 00:21:44.180 --rc genhtml_legend=1 00:21:44.180 --rc geninfo_all_blocks=1 
00:21:44.180 --rc geninfo_unexecuted_blocks=1 00:21:44.180 00:21:44.180 ' 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.180 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:44.181 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:44.181 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:44.181 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.181 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.181 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.181 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:44.181 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:44.181 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:21:44.181 11:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.467 11:23:42 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:47.467 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.467 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:47.468 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:47.468 Found net devices under 0000:82:00.0: cvl_0_0 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:47.468 Found net devices under 0000:82:00.1: cvl_0_1 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:47.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:21:47.468 00:21:47.468 --- 10.0.0.2 ping statistics --- 00:21:47.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.468 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:47.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:21:47.468 00:21:47.468 --- 10.0.0.1 ping statistics --- 00:21:47.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.468 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2680039 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 2680039 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2680039 ']' 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:47.468 [2024-11-19 11:23:42.489796] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:21:47.468 [2024-11-19 11:23:42.489872] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.468 [2024-11-19 11:23:42.571139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:47.468 [2024-11-19 11:23:42.628551] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.468 [2024-11-19 11:23:42.628600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.468 [2024-11-19 11:23:42.628630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.468 [2024-11-19 11:23:42.628643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:47.468 [2024-11-19 11:23:42.628653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:47.468 [2024-11-19 11:23:42.630115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:47.468 [2024-11-19 11:23:42.630143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:47.468 [2024-11-19 11:23:42.630147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:47.468 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:47.469 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:47.469 11:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:21:47.727 [2024-11-19 11:23:43.077208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:47.727 11:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:21:47.984 Malloc0
00:21:47.985 11:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:48.242 11:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:48.500 11:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:49.064 [2024-11-19 11:23:44.254681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:49.065 11:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:49.065 [2024-11-19 11:23:44.551520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:49.323 11:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:49.323 [2024-11-19 11:23:44.816446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:21:49.581 11:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2680332
00:21:49.581 11:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:21:49.581 11:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:49.581 11:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2680332 /var/tmp/bdevperf.sock
00:21:49.581 11:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2680332 ']'
00:21:49.581 11:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:49.581 11:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:49.581 11:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:49.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:49.581 11:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:49.581 11:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:49.838 11:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:49.838 11:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:21:49.838 11:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:50.095 NVMe0n1
00:21:50.095 11:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:50.660 
00:21:50.660 11:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2680468
00:21:50.660 11:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:50.660 11:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:21:51.593 11:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:52.160 [2024-11-19 11:23:47.378467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d340 is same with the state(6) to be set
00:21:52.160 [... same tcp.c:1773 *ERROR* line repeated for tqpair=0xc9d340 from 2024-11-19 11:23:47.378591 through 11:23:47.379773; duplicate log lines elided ...]
00:21:52.161 11:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:21:55.446 11:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:21:55.446 
00:21:55.446 11:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:56.012 [2024-11-19 11:23:51.255955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9de40 is same with the state(6) to be set
00:21:56.012 [... same tcp.c:1773 *ERROR* line repeated for tqpair=0xc9de40 from 2024-11-19 11:23:51.256024 through 11:23:51.256500; duplicate log lines elided ...]
00:21:56.013 11:23:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:21:59.296 11:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:59.296 [2024-11-19 11:23:54.589816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:59.296 11:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:22:00.230 11:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:00.489 [2024-11-19 11:23:55.932220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set
00:22:00.489 [... same tcp.c:1773 *ERROR* line repeated for tqpair=0xb63220 from 2024-11-19 11:23:55.932309 through 11:23:55.932425; duplicate log lines elided ...]
00:22:00.489 [2024-11-19 11:23:55.932437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xb63220 is same with the state(6) to be set
00:22:00.489 [... same tcp.c:1773 *ERROR* line repeated for tqpair=0xb63220 from 2024-11-19 11:23:55.932449 through 11:23:55.933169; duplicate log lines elided ...]
00:22:00.490 [2024-11-19 11:23:55.933181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220
is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 
00:22:00.490 [2024-11-19 11:23:55.933324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933503] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set 00:22:00.490 [2024-11-19 11:23:55.933653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63220 is same with the state(6) to be set
[identical tcp.c:1773 tqpair=0xb63220 recv-state errors repeated through 11:23:55.933725; duplicates elided]
00:22:00.490 11:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2680468
00:22:05.884 {
00:22:05.884   "results": [
00:22:05.884     {
00:22:05.884       "job": "NVMe0n1",
00:22:05.884       "core_mask": "0x1",
00:22:05.884       "workload": "verify",
00:22:05.884       "status": "finished",
00:22:05.884       "verify_range": {
00:22:05.884         "start": 0,
00:22:05.884         "length": 16384
00:22:05.884       },
00:22:05.884       "queue_depth": 128,
00:22:05.884       "io_size": 4096,
00:22:05.884       "runtime": 15.002696,
00:22:05.884       "iops": 8663.77616396413,
00:22:05.884       "mibps": 33.842875640484884,
00:22:05.884       "io_failed": 4341,
00:22:05.884       "io_timeout": 0,
00:22:05.884       "avg_latency_us": 14270.774669524386,
00:22:05.884       "min_latency_us": 591.6444444444444,
00:22:05.884       "max_latency_us": 18252.98962962963
00:22:05.884     }
00:22:05.884   ],
00:22:05.884   "core_count": 1
00:22:05.884 }
00:22:05.884 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- #
killprocess 2680332 00:22:05.884 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2680332 ']' 00:22:05.884 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2680332 00:22:05.884 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:05.884 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.884 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2680332 00:22:05.884 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.884 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.884 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2680332' 00:22:05.884 killing process with pid 2680332 00:22:05.884 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2680332 00:22:05.884 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2680332 00:22:06.152 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:06.152 [2024-11-19 11:23:44.879868] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:22:06.152 [2024-11-19 11:23:44.879956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680332 ] 00:22:06.152 [2024-11-19 11:23:44.958033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.152 [2024-11-19 11:23:45.016329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.152 Running I/O for 15 seconds... 00:22:06.152 8531.00 IOPS, 33.32 MiB/s [2024-11-19T10:24:01.649Z] [2024-11-19 11:23:47.380813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-11-19 11:23:47.380861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.152 [2024-11-19 11:23:47.380887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-11-19 11:23:47.380904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.152 [2024-11-19 11:23:47.380922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-11-19 11:23:47.380937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.152 [2024-11-19 11:23:47.380953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-11-19 11:23:47.380969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:06.152 [2024-11-19 11:23:47.380985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.152 [2024-11-19 11:23:47.380999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[analogous command/completion pairs, each ABORTED - SQ DELETION (00/08), for READ lba:82320-82728 and WRITE lba:82736-82784 (len:8, varying cid), repeated through 11:23:47.382884; near-duplicates elided]
00:22:06.154 [2024-11-19 11:23:47.382899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1
lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.382914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.382929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.382944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.382959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.382973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.382988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 
11:23:47.383076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383241] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 [2024-11-19 11:23:47.383921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.154 
[2024-11-19 11:23:47.383951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.154 [2024-11-19 11:23:47.383966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.155 [2024-11-19 11:23:47.383980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.383995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.155 [2024-11-19 11:23:47.384009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.155 [2024-11-19 11:23:47.384038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.155 [2024-11-19 11:23:47.384067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.155 [2024-11-19 11:23:47.384096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384111] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.155 [2024-11-19 11:23:47.384125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.155 [2024-11-19 11:23:47.384154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.155 [2024-11-19 11:23:47.384182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.155 [2024-11-19 11:23:47.384211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.155 [2024-11-19 11:23:47.384239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.155 [2024-11-19 11:23:47.384268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83176 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83184 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83192 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:83200 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83208 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83216 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83224 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 
[2024-11-19 11:23:47.384655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83232 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83240 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83248 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:22:06.155 [2024-11-19 11:23:47.384822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83256 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83264 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83272 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.384960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83280 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.384973] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.384985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.384996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.385007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83288 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.385019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.155 [2024-11-19 11:23:47.385031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.155 [2024-11-19 11:23:47.385042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.155 [2024-11-19 11:23:47.385057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83296 len:8 PRP1 0x0 PRP2 0x0 00:22:06.155 [2024-11-19 11:23:47.385070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-11-19 11:23:47.385136] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:06.156 [2024-11-19 11:23:47.385175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.156 [2024-11-19 11:23:47.385194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-11-19 11:23:47.385209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.156 [2024-11-19 11:23:47.385222] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-11-19 11:23:47.385235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.156 [2024-11-19 11:23:47.385248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-11-19 11:23:47.385261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.156 [2024-11-19 11:23:47.385273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-11-19 11:23:47.385287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:06.156 [2024-11-19 11:23:47.385347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1545560 (9): Bad file descriptor 00:22:06.156 [2024-11-19 11:23:47.388594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:06.156 [2024-11-19 11:23:47.426133] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
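Each command/completion pair in the abort storm above follows a fixed `nvme_qpair.c` print format (opcode, sqid, cid, nsid, lba, len). As a minimal sketch of how such a log could be summarized offline, here is a hypothetical parsing helper (not part of SPDK; the regex is derived only from the line format visible in this log):

```python
import re

# Hypothetical helper (not part of SPDK): extract the fields printed by
# nvme_io_qpair_print_command during an abort storm like the one above.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: "
    r"(?P<op>READ|WRITE) sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) "
    r"nsid:(?P<nsid>\d+) lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

def summarize(lines):
    """Count commands per opcode and track the LBA range they span."""
    counts, lbas = {}, []
    for line in lines:
        m = CMD_RE.search(line)
        if m:  # completion lines (spdk_nvme_print_completion) are skipped
            counts[m["op"]] = counts.get(m["op"], 0) + 1
            lbas.append(int(m["lba"]))
    return counts, ((min(lbas), max(lbas)) if lbas else None)

# Sample lines copied from the log above.
sample = [
    "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0",
    "nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0",
    "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000",
]
counts, lba_range = summarize(sample)
print(counts, lba_range)  # → {'READ': 1, 'WRITE': 1} (82632, 82736)
```

Run against the full console log, this kind of helper makes it easy to see how many in-flight I/Os were aborted by the SQ deletion before the failover to 10.0.0.2:4421.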
00:22:06.156 8396.50 IOPS, 32.80 MiB/s [2024-11-19T10:24:01.653Z] 8512.00 IOPS, 33.25 MiB/s [2024-11-19T10:24:01.653Z] 8592.00 IOPS, 33.56 MiB/s [2024-11-19T10:24:01.653Z] 8601.40 IOPS, 33.60 MiB/s [2024-11-19T10:24:01.653Z]
[... 2024-11-19 11:23:51.258096-11:23:51.258675: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: READ sqid:1 nsid:1 lba:100320-100440 len:8, each aborted with SQ DELETION (00/08) ...]
[2024-11-19 11:23:51.258691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56
nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-11-19 11:23:51.258705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-11-19 11:23:51.258736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-11-19 11:23:51.258751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-11-19 11:23:51.258772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-11-19 11:23:51.258787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-11-19 11:23:51.258803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-11-19 11:23:51.258818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-11-19 11:23:51.258833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-11-19 11:23:51.258848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-11-19 11:23:51.258864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-11-19 11:23:51.258879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.156 [2024-11-19 11:23:51.258894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-11-19 11:23:51.258908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-11-19 11:23:51.258923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-11-19 11:23:51.258937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.156 [2024-11-19 11:23:51.258953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.156 [2024-11-19 11:23:51.258967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.258982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.258997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.157 [2024-11-19 11:23:51.259419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.157 [2024-11-19 11:23:51.259435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.259937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 
[2024-11-19 11:23:51.259972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.259986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.260001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.260016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.260031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.260046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.260062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.260076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.260092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.260106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-11-19 11:23:51.260121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.157 [2024-11-19 11:23:51.260136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 
[2024-11-19 11:23:51.260504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.260978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.260993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.261008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 
[2024-11-19 11:23:51.261023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.261038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.261053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.261067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.261082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.261096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.261111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.261129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.261144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.261159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.261174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.261188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.261203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.261216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.261231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.261246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.261261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.158 [2024-11-19 11:23:51.261275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.158 [2024-11-19 11:23:51.261290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.159 [2024-11-19 11:23:51.261304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.159 [2024-11-19 11:23:51.261334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.159 [2024-11-19 11:23:51.261373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.159 [2024-11-19 11:23:51.261404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.159 [2024-11-19 11:23:51.261433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.261492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101168 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.261506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.261537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.261552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101176 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.261571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 
[2024-11-19 11:23:51.261586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.261597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.261608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101184 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.261621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.261645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.261656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101192 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.261669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.261693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.261704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101200 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.261717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.261741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:22:06.159 [2024-11-19 11:23:51.261751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101208 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.261764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.261789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.261799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101216 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.261812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.261837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.261847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101224 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.261860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.261884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.261895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101232 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.261908] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.261937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.261948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101240 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.261967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.261981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.261992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.262009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101248 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.262022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.262035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.262046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.262057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101256 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.262069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.262082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.262093] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.262104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101264 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.262117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.262130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.262141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.262152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101272 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.262165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.262178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.262189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.262203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101280 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.262216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.262229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.262240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.262251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101288 len:8 PRP1 0x0 PRP2 
0x0 00:22:06.159 [2024-11-19 11:23:51.262268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.262281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.262291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.262302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101296 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.262319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.262333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.262344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.262372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101304 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.262393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.262407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.262418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.262429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101312 len:8 PRP1 0x0 PRP2 0x0 00:22:06.159 [2024-11-19 11:23:51.262442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.159 [2024-11-19 11:23:51.262455] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.159 [2024-11-19 11:23:51.262466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.159 [2024-11-19 11:23:51.262478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101320 len:8 PRP1 0x0 PRP2 0x0 00:22:06.160 [2024-11-19 11:23:51.262490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:51.262503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.160 [2024-11-19 11:23:51.262514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.160 [2024-11-19 11:23:51.262525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101328 len:8 PRP1 0x0 PRP2 0x0 00:22:06.160 [2024-11-19 11:23:51.262538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:51.262550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.160 [2024-11-19 11:23:51.262561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.160 [2024-11-19 11:23:51.262572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101336 len:8 PRP1 0x0 PRP2 0x0 00:22:06.160 [2024-11-19 11:23:51.262585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:51.262598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.160 [2024-11-19 11:23:51.262608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.160 [2024-11-19 11:23:51.262619] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100632 len:8 PRP1 0x0 PRP2 0x0 00:22:06.160 [2024-11-19 11:23:51.262632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:51.262645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.160 [2024-11-19 11:23:51.262660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.160 [2024-11-19 11:23:51.262671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100640 len:8 PRP1 0x0 PRP2 0x0 00:22:06.160 [2024-11-19 11:23:51.262684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:51.262757] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:06.160 [2024-11-19 11:23:51.262797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.160 [2024-11-19 11:23:51.262831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:51.262846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.160 [2024-11-19 11:23:51.262860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:51.262879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.160 [2024-11-19 11:23:51.262891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:51.262911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.160 [2024-11-19 11:23:51.262925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:51.262937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:06.160 [2024-11-19 11:23:51.262979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1545560 (9): Bad file descriptor 00:22:06.160 [2024-11-19 11:23:51.266277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:06.160 [2024-11-19 11:23:51.288623] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:22:06.160 8561.67 IOPS, 33.44 MiB/s [2024-11-19T10:24:01.657Z] 8588.86 IOPS, 33.55 MiB/s [2024-11-19T10:24:01.657Z] 8616.00 IOPS, 33.66 MiB/s [2024-11-19T10:24:01.657Z] 8639.89 IOPS, 33.75 MiB/s [2024-11-19T10:24:01.657Z] [2024-11-19 11:23:55.934293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 
11:23:55.934519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934694] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.934971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.934987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.935001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.935017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.935032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.935048] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.160 [2024-11-19 11:23:55.935063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.160 [2024-11-19 11:23:55.935079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:06.161 [2024-11-19 11:23:55.935411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:06.161 [2024-11-19 11:23:55.935933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.935979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.935993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.936008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.936023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.936039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.936054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.161 [2024-11-19 11:23:55.936069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.161 [2024-11-19 11:23:55.936084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:06.162 [2024-11-19 11:23:55.936462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.162 [2024-11-19 11:23:55.936615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.936645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.936674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.936703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.936732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.936761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.936790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.936818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.936848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.936877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.936910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.936941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 
[2024-11-19 11:23:55.936970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.936985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.937000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.937015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.937029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.937044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.937058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.937074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.937088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.937103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.937117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.937132] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.937146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.937161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.937176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.937191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.937205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.162 [2024-11-19 11:23:55.937220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.162 [2024-11-19 11:23:55.937234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.163 [2024-11-19 11:23:55.937263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.163 [2024-11-19 11:23:55.937297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.163 [2024-11-19 11:23:55.937327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.163 [2024-11-19 11:23:55.937355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.163 [2024-11-19 11:23:55.937394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.163 [2024-11-19 11:23:55.937423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.163 [2024-11-19 11:23:55.937452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.163 [2024-11-19 11:23:55.937481] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.937529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42824 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.937543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.937573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.937584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42832 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.937597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.937621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.937632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42840 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.937644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.937677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:22:06.163 [2024-11-19 11:23:55.937689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42848 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.937707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.937731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.937743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42856 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.937755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.937779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.937790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42864 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.937803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.937827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.937837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42872 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.937850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.937873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.937884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42880 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.937897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.937920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.937931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42888 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.937944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.937957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.937967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.937978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42896 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.937991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.938003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 
[2024-11-19 11:23:55.938014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.938025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42904 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.938037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.938050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.938067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.938082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42912 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.938096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.938109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.938120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.938131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42920 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.938144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.938157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.938174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.938186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:42928 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.938199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.938212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.938223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.938233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42936 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.938247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.938260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.938270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.938281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42944 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.938294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.938307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.938318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.938329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42952 len:8 PRP1 0x0 PRP2 0x0 00:22:06.163 [2024-11-19 11:23:55.938342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.163 [2024-11-19 11:23:55.938355] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.163 [2024-11-19 11:23:55.938373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.163 [2024-11-19 11:23:55.938386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42960 len:8 PRP1 0x0 PRP2 0x0 00:22:06.164 [2024-11-19 11:23:55.938399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.938412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.164 [2024-11-19 11:23:55.938423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.164 [2024-11-19 11:23:55.938434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42968 len:8 PRP1 0x0 PRP2 0x0 00:22:06.164 [2024-11-19 11:23:55.938446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.938463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.164 [2024-11-19 11:23:55.938475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.164 [2024-11-19 11:23:55.938487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42976 len:8 PRP1 0x0 PRP2 0x0 00:22:06.164 [2024-11-19 11:23:55.938500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.938513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.164 [2024-11-19 11:23:55.938523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.164 [2024-11-19 
11:23:55.938534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42984 len:8 PRP1 0x0 PRP2 0x0 00:22:06.164 [2024-11-19 11:23:55.938547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.938560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.164 [2024-11-19 11:23:55.938576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.164 [2024-11-19 11:23:55.938588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42992 len:8 PRP1 0x0 PRP2 0x0 00:22:06.164 [2024-11-19 11:23:55.938601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.938614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.164 [2024-11-19 11:23:55.938625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.164 [2024-11-19 11:23:55.938636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43000 len:8 PRP1 0x0 PRP2 0x0 00:22:06.164 [2024-11-19 11:23:55.938649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.938661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.164 [2024-11-19 11:23:55.938672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.164 [2024-11-19 11:23:55.938683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43008 len:8 PRP1 0x0 PRP2 0x0 00:22:06.164 [2024-11-19 11:23:55.938703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.938717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.164 [2024-11-19 11:23:55.938728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.164 [2024-11-19 11:23:55.938739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43016 len:8 PRP1 0x0 PRP2 0x0 00:22:06.164 [2024-11-19 11:23:55.938752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.938765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.164 [2024-11-19 11:23:55.938776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.164 [2024-11-19 11:23:55.938787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43024 len:8 PRP1 0x0 PRP2 0x0 00:22:06.164 [2024-11-19 11:23:55.938800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.938869] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:06.164 [2024-11-19 11:23:55.938910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.164 [2024-11-19 11:23:55.938929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.938950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.164 [2024-11-19 11:23:55.938964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.938980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.164 [2024-11-19 11:23:55.938993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.939007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.164 [2024-11-19 11:23:55.939020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.164 [2024-11-19 11:23:55.939032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:06.164 [2024-11-19 11:23:55.939107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1545560 (9): Bad file descriptor 00:22:06.164 [2024-11-19 11:23:55.942336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:06.164 [2024-11-19 11:23:55.977251] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:22:06.164 8622.70 IOPS, 33.68 MiB/s [2024-11-19T10:24:01.661Z] 8645.00 IOPS, 33.77 MiB/s [2024-11-19T10:24:01.661Z] 8642.75 IOPS, 33.76 MiB/s [2024-11-19T10:24:01.661Z] 8654.62 IOPS, 33.81 MiB/s [2024-11-19T10:24:01.661Z] 8653.21 IOPS, 33.80 MiB/s 00:22:06.164 Latency(us) 00:22:06.164 [2024-11-19T10:24:01.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.164 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:06.164 Verification LBA range: start 0x0 length 0x4000 00:22:06.164 NVMe0n1 : 15.00 8663.78 33.84 289.35 0.00 14270.77 591.64 18252.99 00:22:06.164 [2024-11-19T10:24:01.661Z] =================================================================================================================== 00:22:06.164 [2024-11-19T10:24:01.661Z] Total : 8663.78 33.84 289.35 0.00 14270.77 591.64 18252.99 00:22:06.164 Received shutdown signal, test time was about 15.000000 seconds 00:22:06.164 00:22:06.164 Latency(us) 00:22:06.164 [2024-11-19T10:24:01.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.164 [2024-11-19T10:24:01.661Z] =================================================================================================================== 00:22:06.164 [2024-11-19T10:24:01.661Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.164 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:06.164 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:06.164 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:06.164 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2682397 00:22:06.164 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:06.164 11:24:01 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2682397 /var/tmp/bdevperf.sock 00:22:06.164 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2682397 ']' 00:22:06.164 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.164 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.164 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:06.164 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.164 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:06.422 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.422 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:06.422 11:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:06.679 [2024-11-19 11:24:02.029965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:06.679 11:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:06.937 [2024-11-19 11:24:02.302693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:06.937 11:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:07.502 NVMe0n1 00:22:07.502 11:24:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:07.759 00:22:07.759 11:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:08.016 00:22:08.274 11:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:08.274 11:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:08.532 11:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:08.790 11:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:12.071 11:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:12.071 11:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:12.071 11:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2683247 00:22:12.071 11:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:12.071 11:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2683247 00:22:13.445 { 00:22:13.445 "results": [ 00:22:13.445 { 00:22:13.445 "job": "NVMe0n1", 00:22:13.445 "core_mask": "0x1", 00:22:13.445 "workload": "verify", 00:22:13.445 "status": "finished", 00:22:13.445 "verify_range": { 00:22:13.445 "start": 0, 00:22:13.445 "length": 16384 00:22:13.445 }, 00:22:13.445 "queue_depth": 128, 00:22:13.445 "io_size": 4096, 00:22:13.445 "runtime": 1.01062, 00:22:13.445 "iops": 8625.398270368685, 00:22:13.445 "mibps": 33.69296199362768, 00:22:13.445 "io_failed": 0, 00:22:13.445 "io_timeout": 0, 00:22:13.445 "avg_latency_us": 14760.983733615458, 00:22:13.445 "min_latency_us": 3082.6192592592593, 00:22:13.445 "max_latency_us": 12233.386666666667 00:22:13.445 } 00:22:13.445 ], 00:22:13.445 "core_count": 1 00:22:13.445 } 00:22:13.445 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:13.445 [2024-11-19 11:24:01.525401] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:22:13.445 [2024-11-19 11:24:01.525499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682397 ] 00:22:13.445 [2024-11-19 11:24:01.605729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.445 [2024-11-19 11:24:01.663248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.445 [2024-11-19 11:24:04.068322] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:13.445 [2024-11-19 11:24:04.068435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.445 [2024-11-19 11:24:04.068459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.445 [2024-11-19 11:24:04.068477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.445 [2024-11-19 11:24:04.068502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.445 [2024-11-19 11:24:04.068516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.445 [2024-11-19 11:24:04.068530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.445 [2024-11-19 11:24:04.068544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.445 [2024-11-19 11:24:04.068557] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.445 [2024-11-19 11:24:04.068571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:13.445 [2024-11-19 11:24:04.068627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:13.445 [2024-11-19 11:24:04.068659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a06560 (9): Bad file descriptor 00:22:13.445 [2024-11-19 11:24:04.081317] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:13.445 Running I/O for 1 seconds... 00:22:13.445 8589.00 IOPS, 33.55 MiB/s 00:22:13.445 Latency(us) 00:22:13.445 [2024-11-19T10:24:08.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.446 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:13.446 Verification LBA range: start 0x0 length 0x4000 00:22:13.446 NVMe0n1 : 1.01 8625.40 33.69 0.00 0.00 14760.98 3082.62 12233.39 00:22:13.446 [2024-11-19T10:24:08.943Z] =================================================================================================================== 00:22:13.446 [2024-11-19T10:24:08.943Z] Total : 8625.40 33.69 0.00 0.00 14760.98 3082.62 12233.39 00:22:13.446 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:13.446 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:13.446 11:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:13.703 11:24:09 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:13.703 11:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:13.960 11:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:14.218 11:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:17.497 11:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:17.497 11:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:17.755 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2682397 00:22:17.755 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2682397 ']' 00:22:17.755 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2682397 00:22:17.755 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:17.755 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.755 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2682397 00:22:17.755 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:17.755 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:17.755 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2682397' 00:22:17.755 killing 
process with pid 2682397 00:22:17.755 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2682397 00:22:17.755 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2682397 00:22:18.013 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:18.013 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.271 rmmod nvme_tcp 00:22:18.271 rmmod nvme_fabrics 00:22:18.271 rmmod nvme_keyring 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2680039 ']' 00:22:18.271 11:24:13 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2680039 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2680039 ']' 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2680039 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2680039 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2680039' 00:22:18.271 killing process with pid 2680039 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2680039 00:22:18.271 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2680039 00:22:18.531 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:18.531 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:18.531 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:18.531 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:18.531 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:18.531 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:18.531 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:18.532 11:24:13 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:18.532 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:18.532 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.532 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.532 11:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.437 11:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:20.437 00:22:20.437 real 0m36.486s 00:22:20.437 user 2m7.154s 00:22:20.437 sys 0m6.739s 00:22:20.437 11:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:20.437 11:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:20.437 ************************************ 00:22:20.437 END TEST nvmf_failover 00:22:20.437 ************************************ 00:22:20.696 11:24:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:20.696 11:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:20.696 11:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:20.696 11:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.696 ************************************ 00:22:20.696 START TEST nvmf_host_discovery 00:22:20.696 ************************************ 00:22:20.696 11:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:20.696 * Looking for test storage... 
00:22:20.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:20.696 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:20.696 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:20.696 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:20.696 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:20.696 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.696 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.696 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.696 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:20.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.697 --rc genhtml_branch_coverage=1 00:22:20.697 --rc genhtml_function_coverage=1 00:22:20.697 --rc 
genhtml_legend=1 00:22:20.697 --rc geninfo_all_blocks=1 00:22:20.697 --rc geninfo_unexecuted_blocks=1 00:22:20.697 00:22:20.697 ' 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:20.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.697 --rc genhtml_branch_coverage=1 00:22:20.697 --rc genhtml_function_coverage=1 00:22:20.697 --rc genhtml_legend=1 00:22:20.697 --rc geninfo_all_blocks=1 00:22:20.697 --rc geninfo_unexecuted_blocks=1 00:22:20.697 00:22:20.697 ' 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:20.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.697 --rc genhtml_branch_coverage=1 00:22:20.697 --rc genhtml_function_coverage=1 00:22:20.697 --rc genhtml_legend=1 00:22:20.697 --rc geninfo_all_blocks=1 00:22:20.697 --rc geninfo_unexecuted_blocks=1 00:22:20.697 00:22:20.697 ' 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:20.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.697 --rc genhtml_branch_coverage=1 00:22:20.697 --rc genhtml_function_coverage=1 00:22:20.697 --rc genhtml_legend=1 00:22:20.697 --rc geninfo_all_blocks=1 00:22:20.697 --rc geninfo_unexecuted_blocks=1 00:22:20.697 00:22:20.697 ' 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.697 11:24:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.697 11:24:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.697 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.698 11:24:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:20.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:22:20.698 11:24:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.233 
11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.233 11:24:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:23.233 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:23.233 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:23.233 Found net devices under 0000:82:00.0: cvl_0_0 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:23.233 Found net devices under 0000:82:00.1: cvl_0_1 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.233 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:23.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:22:23.492 00:22:23.492 --- 10.0.0.2 ping statistics --- 00:22:23.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.492 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:23.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:22:23.492 00:22:23.492 --- 10.0.0.1 ping statistics --- 00:22:23.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.492 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.492 
11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2686623 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2686623 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2686623 ']' 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.492 11:24:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.492 [2024-11-19 11:24:18.857160] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:22:23.492 [2024-11-19 11:24:18.857236] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.492 [2024-11-19 11:24:18.940265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.751 [2024-11-19 11:24:18.998540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.751 [2024-11-19 11:24:18.998589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.751 [2024-11-19 11:24:18.998619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.751 [2024-11-19 11:24:18.998631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.751 [2024-11-19 11:24:18.998641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.751 [2024-11-19 11:24:18.999282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.751 [2024-11-19 11:24:19.143634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.751 [2024-11-19 11:24:19.151869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:23.751 11:24:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.751 null0 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.751 null1 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2686648 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2686648 /tmp/host.sock 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2686648 ']' 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:23.751 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.751 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.751 [2024-11-19 11:24:19.228359] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:22:23.751 [2024-11-19 11:24:19.228456] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2686648 ] 00:22:24.009 [2024-11-19 11:24:19.302914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.009 [2024-11-19 11:24:19.361398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:24.009 
11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.009 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.010 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.010 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.010 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:24.010 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:24.267 11:24:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:24.267 11:24:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.267 [2024-11-19 11:24:19.753488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.267 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
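The provisioning calls traced above (subsystem creation, namespace attach, listener, allowed host) can be collected into one short sketch. This is a hedged reconstruction, not the suite's actual script: `rpc_cmd` here is a local stub that only records the call, standing in for the real helper that wraps `scripts/rpc.py` against the target's RPC socket. The RPC names and arguments are the ones that appear verbatim in the trace.

```shell
#!/usr/bin/env bash
# Sketch of the target-side provisioning sequence exercised by the trace.
# rpc_cmd is a stub (assumption: the real one forwards to scripts/rpc.py).
rpc_cmd() {
    echo "rpc: $*"
}

# Steps mirrored from discovery.sh@86/@90/@96/@103 in the trace:
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
```

Once the listener and host entry exist, the discovery poller on 10.0.0.2:8009 picks up the new subsystem, which is what the subsequent `get_subsystem_names`/`get_bdev_list` polls wait for.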
00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:24.526 11:24:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:25.092 [2024-11-19 11:24:20.532532] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:25.092 [2024-11-19 11:24:20.532565] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:25.092 [2024-11-19 11:24:20.532589] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:25.350 [2024-11-19 11:24:20.618860] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:25.350 [2024-11-19 11:24:20.679652] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:22:25.350 [2024-11-19 11:24:20.680604] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x23f1f80:1 started. 00:22:25.350 [2024-11-19 11:24:20.682273] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:25.350 [2024-11-19 11:24:20.682293] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:25.350 [2024-11-19 11:24:20.689137] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x23f1f80 was disconnected and freed. delete nvme_qpair. 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:25.608 11:24:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:25.608 11:24:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.608 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:25.867 
11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:25.867 [2024-11-19 11:24:21.329425] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x23f2660:1 started. 00:22:25.867 [2024-11-19 11:24:21.332740] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x23f2660 was disconnected and freed. delete nvme_qpair. 
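The `waitforcondition` expansions that dominate this trace (autotest_common.sh@918-@924: `local cond`, `local max=10`, `(( max-- ))`, `eval`, `sleep 1`, `return 0`) follow a simple polling loop. A minimal sketch reconstructed from those trace lines, not the exact SPDK source:

```shell
#!/usr/bin/env bash
# Poll a shell condition until it holds or the attempt budget runs out,
# mirroring the max=10 / eval / sleep-1 pattern visible in the xtrace.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # eval lets callers pass compound conditions such as
        # 'get_notification_count && ((notification_count == expected_count))'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}
```

In the trace it is invoked with conditions like `waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'`, so a slow discovery attach simply costs extra one-second iterations rather than a failure.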
00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.867 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.151 [2024-11-19 11:24:21.402699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:26.151 [2024-11-19 11:24:21.403094] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:26.151 [2024-11-19 11:24:21.403132] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:26.151 11:24:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.151 [2024-11-19 11:24:21.488820] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:26.151 11:24:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:26.445 [2024-11-19 11:24:21.748229] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:22:26.445 [2024-11-19 11:24:21.748278] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:26.445 [2024-11-19 11:24:21.748294] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
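The `get_subsystem_names`/`get_bdev_list`/`get_subsystem_paths` helpers above all end in the same `sort | xargs` pipeline (after `jq -r '.[].name'` extracts names from the RPC JSON), which is why the trace can compare multi-item results as a single string like `nvme0n1 nvme0n2` or `4420 4421`. A small sketch of just that normalization step, with the jq stage omitted so it runs stand-alone:

```shell
#!/usr/bin/env bash
# Normalize a newline-separated name list the way the get_* helpers do:
# sort for a deterministic order, then xargs to join onto one line.
# (The real helpers feed this from jq -r '.[].name'; port lists use sort -n.)
normalize_names() {
    sort | xargs
}

# Two bdev names arriving in arbitrary RPC order compare equal to the
# expected string regardless of ordering.
printf '%s\n' nvme0n2 nvme0n1 | normalize_names   # → nvme0n1 nvme0n2
```

This is also why an empty RPC result collapses to `''` in comparisons such as `[[ '' == '' ]]` earlier in the trace: `xargs` on empty input emits an empty line.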
00:22:26.445 [2024-11-19 11:24:21.748301] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.380 [2024-11-19 11:24:22.622801] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:27.380 [2024-11-19 11:24:22.622843] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:27.380 [2024-11-19 11:24:22.624104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.380 [2024-11-19 11:24:22.624138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.380 [2024-11-19 11:24:22.624154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.380 [2024-11-19 11:24:22.624172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.380 [2024-11-19 11:24:22.624189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.380 [2024-11-19 11:24:22.624201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.380 [2024-11-19 11:24:22.624214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.380 [2024-11-19 11:24:22.624226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.380 [2024-11-19 11:24:22.624238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2550 is same with the state(6) to be set 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:27.380 
11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:27.380 [2024-11-19 11:24:22.634116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c2550 (9): Bad file descriptor 00:22:27.380 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.380 [2024-11-19 11:24:22.644156] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.380 [2024-11-19 11:24:22.644186] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.380 [2024-11-19 11:24:22.644196] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.380 [2024-11-19 11:24:22.644204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.380 [2024-11-19 11:24:22.644248] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:27.380 [2024-11-19 11:24:22.644516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.381 [2024-11-19 11:24:22.644547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c2550 with addr=10.0.0.2, port=4420 00:22:27.381 [2024-11-19 11:24:22.644564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2550 is same with the state(6) to be set 00:22:27.381 [2024-11-19 11:24:22.644588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c2550 (9): Bad file descriptor 00:22:27.381 [2024-11-19 11:24:22.644610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.381 [2024-11-19 11:24:22.644625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.381 [2024-11-19 11:24:22.644662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.381 [2024-11-19 11:24:22.644675] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.381 [2024-11-19 11:24:22.644684] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.381 [2024-11-19 11:24:22.644692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:27.381 [2024-11-19 11:24:22.654281] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.381 [2024-11-19 11:24:22.654302] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:22:27.381 [2024-11-19 11:24:22.654311] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.381 [2024-11-19 11:24:22.654318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.381 [2024-11-19 11:24:22.654357] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:27.381 [2024-11-19 11:24:22.654556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.381 [2024-11-19 11:24:22.654585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c2550 with addr=10.0.0.2, port=4420 00:22:27.381 [2024-11-19 11:24:22.654601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2550 is same with the state(6) to be set 00:22:27.381 [2024-11-19 11:24:22.654635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c2550 (9): Bad file descriptor 00:22:27.381 [2024-11-19 11:24:22.654659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.381 [2024-11-19 11:24:22.654673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.381 [2024-11-19 11:24:22.654686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.381 [2024-11-19 11:24:22.654698] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.381 [2024-11-19 11:24:22.654707] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.381 [2024-11-19 11:24:22.654714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:27.381 [2024-11-19 11:24:22.664400] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.381 [2024-11-19 11:24:22.664423] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.381 [2024-11-19 11:24:22.664432] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.381 [2024-11-19 11:24:22.664440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.381 [2024-11-19 11:24:22.664465] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:27.381 [2024-11-19 11:24:22.664597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.381 [2024-11-19 11:24:22.664624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c2550 with addr=10.0.0.2, port=4420 00:22:27.381 [2024-11-19 11:24:22.664640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2550 is same with the state(6) to be set 00:22:27.381 [2024-11-19 11:24:22.664676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c2550 (9): Bad file descriptor 00:22:27.381 [2024-11-19 11:24:22.664696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.381 [2024-11-19 11:24:22.664729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.381 [2024-11-19 11:24:22.664741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.381 [2024-11-19 11:24:22.664752] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:27.381 [2024-11-19 11:24:22.664760] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.381 [2024-11-19 11:24:22.664767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:27.381 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:22:27.381 [2024-11-19 11:24:22.674501] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.381 [2024-11-19 11:24:22.674526] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.381 [2024-11-19 11:24:22.674536] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.381 [2024-11-19 11:24:22.674544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.381 [2024-11-19 11:24:22.674572] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:27.381 [2024-11-19 11:24:22.674718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.381 [2024-11-19 11:24:22.674745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c2550 with addr=10.0.0.2, port=4420 00:22:27.381 [2024-11-19 11:24:22.674761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2550 is same with the state(6) to be set 00:22:27.381 [2024-11-19 11:24:22.674782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c2550 (9): Bad file descriptor 00:22:27.381 [2024-11-19 11:24:22.674803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.381 [2024-11-19 11:24:22.674816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.381 [2024-11-19 11:24:22.674829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.381 [2024-11-19 11:24:22.674841] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:27.381 [2024-11-19 11:24:22.674854] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.381 [2024-11-19 11:24:22.674863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:27.381 [2024-11-19 11:24:22.684607] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.381 [2024-11-19 11:24:22.684629] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.381 [2024-11-19 11:24:22.684638] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.381 [2024-11-19 11:24:22.684646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.381 [2024-11-19 11:24:22.684685] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:27.381 [2024-11-19 11:24:22.684868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.381 [2024-11-19 11:24:22.684895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c2550 with addr=10.0.0.2, port=4420 00:22:27.381 [2024-11-19 11:24:22.684910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2550 is same with the state(6) to be set 00:22:27.381 [2024-11-19 11:24:22.684932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c2550 (9): Bad file descriptor 00:22:27.381 [2024-11-19 11:24:22.684952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.382 [2024-11-19 11:24:22.684965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.382 [2024-11-19 11:24:22.684978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.382 [2024-11-19 11:24:22.684990] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.382 [2024-11-19 11:24:22.684998] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.382 [2024-11-19 11:24:22.685006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.382 [2024-11-19 11:24:22.694725] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.382 [2024-11-19 11:24:22.694745] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:22:27.382 [2024-11-19 11:24:22.694753] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.382 [2024-11-19 11:24:22.694760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.382 [2024-11-19 11:24:22.694798] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:27.382 [2024-11-19 11:24:22.695025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.382 [2024-11-19 11:24:22.695050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c2550 with addr=10.0.0.2, port=4420 00:22:27.382 [2024-11-19 11:24:22.695065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2550 is same with the state(6) to be set 00:22:27.382 [2024-11-19 11:24:22.695086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c2550 (9): Bad file descriptor 00:22:27.382 [2024-11-19 11:24:22.695105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.382 [2024-11-19 11:24:22.695119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.382 [2024-11-19 11:24:22.695136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.382 [2024-11-19 11:24:22.695148] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.382 [2024-11-19 11:24:22.695156] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.382 [2024-11-19 11:24:22.695164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:27.382 [2024-11-19 11:24:22.704832] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.382 [2024-11-19 11:24:22.704851] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.382 [2024-11-19 11:24:22.704859] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.382 [2024-11-19 11:24:22.704866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.382 [2024-11-19 11:24:22.704902] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:27.382 [2024-11-19 11:24:22.705087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.382 [2024-11-19 11:24:22.705112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c2550 with addr=10.0.0.2, port=4420 00:22:27.382 [2024-11-19 11:24:22.705126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2550 is same with the state(6) to be set 00:22:27.382 [2024-11-19 11:24:22.705145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c2550 (9): Bad file descriptor 00:22:27.382 [2024-11-19 11:24:22.705164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.382 [2024-11-19 11:24:22.705176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.382 [2024-11-19 11:24:22.705189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.382 [2024-11-19 11:24:22.705199] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.382 [2024-11-19 11:24:22.705207] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.382 [2024-11-19 11:24:22.705214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:27.382 [2024-11-19 11:24:22.714938] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.382 [2024-11-19 11:24:22.714962] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.382 [2024-11-19 11:24:22.714972] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.382 [2024-11-19 11:24:22.714979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.382 [2024-11-19 11:24:22.715020] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:27.382 [2024-11-19 11:24:22.715254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.382 [2024-11-19 11:24:22.715280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c2550 with addr=10.0.0.2, port=4420 00:22:27.382 [2024-11-19 11:24:22.715295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2550 is same with the state(6) to be set 00:22:27.382 [2024-11-19 11:24:22.715317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c2550 (9): Bad file descriptor 00:22:27.382 [2024-11-19 11:24:22.715337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.382 [2024-11-19 11:24:22.715376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.382 [2024-11-19 11:24:22.715391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.382 [2024-11-19 11:24:22.715404] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.382 [2024-11-19 11:24:22.715412] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.382 [2024-11-19 11:24:22.715420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:27.382 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.382 [2024-11-19 11:24:22.725053] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.382 [2024-11-19 11:24:22.725072] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:22:27.382 [2024-11-19 11:24:22.725081] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.382 [2024-11-19 11:24:22.725088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.382 [2024-11-19 11:24:22.725124] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:27.382 [2024-11-19 11:24:22.725384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.382 [2024-11-19 11:24:22.725412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c2550 with addr=10.0.0.2, port=4420 00:22:27.382 [2024-11-19 11:24:22.725428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2550 is same with the state(6) to be set 00:22:27.382 [2024-11-19 11:24:22.725449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c2550 (9): Bad file descriptor 00:22:27.382 [2024-11-19 11:24:22.725482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.382 [2024-11-19 11:24:22.725499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.382 [2024-11-19 11:24:22.725513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.382 [2024-11-19 11:24:22.725534] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.382 [2024-11-19 11:24:22.725544] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.383 [2024-11-19 11:24:22.725552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:27.383 [2024-11-19 11:24:22.735157] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.383 [2024-11-19 11:24:22.735178] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.383 [2024-11-19 11:24:22.735186] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.383 [2024-11-19 11:24:22.735193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.383 [2024-11-19 11:24:22.735232] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:27.383 [2024-11-19 11:24:22.735461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.383 [2024-11-19 11:24:22.735489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c2550 with addr=10.0.0.2, port=4420 00:22:27.383 [2024-11-19 11:24:22.735506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2550 is same with the state(6) to be set 00:22:27.383 [2024-11-19 11:24:22.735527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c2550 (9): Bad file descriptor 00:22:27.383 [2024-11-19 11:24:22.735560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.383 [2024-11-19 11:24:22.735577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.383 [2024-11-19 11:24:22.735591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.383 [2024-11-19 11:24:22.735603] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:27.383 [2024-11-19 11:24:22.735612] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.383 [2024-11-19 11:24:22.735619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:27.383 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:22:27.383 11:24:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:27.383 [2024-11-19 11:24:22.745264] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.383 [2024-11-19 11:24:22.745284] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.383 [2024-11-19 11:24:22.745292] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.383 [2024-11-19 11:24:22.745299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.383 [2024-11-19 11:24:22.745336] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:27.383 [2024-11-19 11:24:22.745475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.383 [2024-11-19 11:24:22.745503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c2550 with addr=10.0.0.2, port=4420 00:22:27.383 [2024-11-19 11:24:22.745519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2550 is same with the state(6) to be set 00:22:27.383 [2024-11-19 11:24:22.745541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c2550 (9): Bad file descriptor 00:22:27.383 [2024-11-19 11:24:22.745562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.383 [2024-11-19 11:24:22.745582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.383 [2024-11-19 11:24:22.745596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.383 [2024-11-19 11:24:22.745607] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.383 [2024-11-19 11:24:22.745616] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.383 [2024-11-19 11:24:22.745623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:27.383 [2024-11-19 11:24:22.749606] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:27.383 [2024-11-19 11:24:22.749635] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 
-- # expected_count=0 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.316 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:28.574 11:24:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.574 
11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:28.574 11:24:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.574 11:24:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.947 [2024-11-19 11:24:25.043978] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:29.947 [2024-11-19 11:24:25.044001] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:29.947 [2024-11-19 11:24:25.044021] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:29.947 [2024-11-19 11:24:25.172449] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:30.206 [2024-11-19 11:24:25.476881] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:22:30.206 [2024-11-19 11:24:25.477627] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x23bf270:1 started. 00:22:30.206 [2024-11-19 11:24:25.479709] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:30.206 [2024-11-19 11:24:25.479739] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:30.206 [2024-11-19 11:24:25.482113] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x23bf270 was disconnected and freed. delete nvme_qpair. 
00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.206 request: 00:22:30.206 { 00:22:30.206 "name": "nvme", 00:22:30.206 "trtype": "tcp", 00:22:30.206 "traddr": "10.0.0.2", 00:22:30.206 "adrfam": "ipv4", 00:22:30.206 "trsvcid": "8009", 00:22:30.206 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:30.206 "wait_for_attach": true, 00:22:30.206 "method": "bdev_nvme_start_discovery", 00:22:30.206 "req_id": 1 00:22:30.206 } 00:22:30.206 Got JSON-RPC error response 00:22:30.206 response: 00:22:30.206 { 00:22:30.206 "code": -17, 00:22:30.206 "message": "File exists" 00:22:30.206 } 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.206 request: 00:22:30.206 { 00:22:30.206 "name": "nvme_second", 00:22:30.206 "trtype": "tcp", 00:22:30.206 "traddr": "10.0.0.2", 00:22:30.206 "adrfam": "ipv4", 00:22:30.206 "trsvcid": "8009", 00:22:30.206 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:30.206 "wait_for_attach": true, 00:22:30.206 "method": "bdev_nvme_start_discovery", 00:22:30.206 "req_id": 1 00:22:30.206 } 00:22:30.206 Got JSON-RPC error response 00:22:30.206 response: 00:22:30.206 { 00:22:30.206 "code": -17, 00:22:30.206 "message": "File exists" 00:22:30.206 } 
00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq 
-r '.[].name' 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:30.206 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:30.207 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:30.207 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:30.207 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:30.207 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.207 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:30.207 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.207 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:30.207 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:30.207 11:24:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.580 [2024-11-19 11:24:26.691616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.580 [2024-11-19 11:24:26.691667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fce40 with addr=10.0.0.2, port=8010 00:22:31.580 [2024-11-19 11:24:26.691689] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:31.580 [2024-11-19 11:24:26.691716] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:31.580 [2024-11-19 11:24:26.691727] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:32.514 [2024-11-19 11:24:27.694023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.514 [2024-11-19 11:24:27.694066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fce40 with addr=10.0.0.2, port=8010 00:22:32.514 [2024-11-19 11:24:27.694086] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:32.514 [2024-11-19 11:24:27.694107] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:32.514 [2024-11-19 11:24:27.694118] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:33.448 [2024-11-19 11:24:28.696241] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:33.448 request: 00:22:33.448 { 00:22:33.448 "name": "nvme_second", 00:22:33.448 "trtype": "tcp", 00:22:33.448 "traddr": "10.0.0.2", 00:22:33.448 "adrfam": "ipv4", 00:22:33.448 "trsvcid": "8010", 00:22:33.448 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:33.448 "wait_for_attach": false, 00:22:33.448 "attach_timeout_ms": 3000, 00:22:33.448 "method": "bdev_nvme_start_discovery", 00:22:33.448 "req_id": 1 
00:22:33.448 } 00:22:33.448 Got JSON-RPC error response 00:22:33.448 response: 00:22:33.448 { 00:22:33.448 "code": -110, 00:22:33.448 "message": "Connection timed out" 00:22:33.448 } 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2686648 00:22:33.448 11:24:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.448 rmmod nvme_tcp 00:22:33.448 rmmod nvme_fabrics 00:22:33.448 rmmod nvme_keyring 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2686623 ']' 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2686623 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2686623 ']' 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2686623 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2686623 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2686623' 00:22:33.448 killing process with pid 2686623 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2686623 00:22:33.448 11:24:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2686623 00:22:33.709 11:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:33.709 11:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:33.709 11:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:33.709 11:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:33.709 11:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:33.709 11:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:33.709 11:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:33.709 11:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:33.709 11:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:33.709 11:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.709 11:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.709 11:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.246 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:22:36.246 00:22:36.246 real 0m15.159s 00:22:36.246 user 0m21.970s 00:22:36.247 sys 0m3.348s 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.247 ************************************ 00:22:36.247 END TEST nvmf_host_discovery 00:22:36.247 ************************************ 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.247 ************************************ 00:22:36.247 START TEST nvmf_host_multipath_status 00:22:36.247 ************************************ 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:36.247 * Looking for test storage... 
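The teardown trace above resolves the target pid's comm name (`ps --no-headers -o comm=`), refuses to signal `sudo` itself, then kills and waits on the pid. A minimal sketch of that killprocess guard, with a hypothetical background `sleep` standing in for the nvmf target process:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from autotest_common.sh above:
# look up the pid's comm name, refuse to kill sudo, then kill and reap.
killprocess() {
  local pid=$1 process_name
  process_name=$(ps --no-headers -o comm= "$pid")
  if [[ $process_name == sudo ]]; then
    echo "refusing to kill sudo (pid $pid)" >&2
    return 1
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2> /dev/null || true   # reap; a killed child exits nonzero
}

sleep 300 &          # hypothetical long-running target process
killprocess $!
```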
00:22:36.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:36.247 11:24:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.247 11:24:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:36.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.247 --rc genhtml_branch_coverage=1 00:22:36.247 --rc genhtml_function_coverage=1 00:22:36.247 --rc genhtml_legend=1 00:22:36.247 --rc geninfo_all_blocks=1 00:22:36.247 --rc geninfo_unexecuted_blocks=1 00:22:36.247 00:22:36.247 ' 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:36.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.247 --rc genhtml_branch_coverage=1 00:22:36.247 --rc genhtml_function_coverage=1 00:22:36.247 --rc genhtml_legend=1 00:22:36.247 --rc geninfo_all_blocks=1 00:22:36.247 --rc geninfo_unexecuted_blocks=1 00:22:36.247 00:22:36.247 ' 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:36.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.247 --rc genhtml_branch_coverage=1 00:22:36.247 --rc genhtml_function_coverage=1 00:22:36.247 --rc genhtml_legend=1 00:22:36.247 --rc geninfo_all_blocks=1 00:22:36.247 --rc geninfo_unexecuted_blocks=1 00:22:36.247 00:22:36.247 ' 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:36.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.247 --rc genhtml_branch_coverage=1 00:22:36.247 --rc genhtml_function_coverage=1 00:22:36.247 --rc genhtml_legend=1 00:22:36.247 --rc geninfo_all_blocks=1 00:22:36.247 --rc geninfo_unexecuted_blocks=1 00:22:36.247 00:22:36.247 ' 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:36.247 
11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
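In the common.sh setup above, `nvme gen-hostnqn` produces `NVME_HOSTNQN` and the uuid suffix becomes `NVME_HOSTID`. A sketch of that relationship, rebuilding both from the uuid that appears in this trace instead of calling the `nvme` CLI:

```shell
# nvme gen-hostnqn emits "nqn.2014-08.org.nvmexpress:uuid:<uuid>";
# common.sh keeps the whole string as NVME_HOSTNQN and the uuid
# suffix as NVME_HOSTID, then packs both into the NVME_HOST arg array.
uuid="8b464f06-2980-e311-ba20-001e67a94acd"   # uuid from the trace above
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$uuid"
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}          # strip the NQN prefix
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

echo "$NVME_HOSTID"   # -> 8b464f06-2980-e311-ba20-001e67a94acd
```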
00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.247 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:36.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
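The `common.sh: line 33: [: : integer expression expected` message in the trace above comes from `'[' '' -eq 1 ']'`: `-eq` needs integer operands, and the tested variable expanded to the empty string. A small demonstration of the failure mode and the usual defensive forms:

```shell
# Reproduce the line-33 message: test(1)'s -eq requires integers, so an
# empty operand makes [ fail with status 2 (a syntax error, distinct
# from a plain "false" comparison, which is status 1).
interrupt_mode=""

[ "$interrupt_mode" -eq 1 ] 2> /dev/null
echo "empty operand -> exit $?"   # -> exit 2

# Defensive forms: default the value, or use arithmetic evaluation,
# which treats an empty/unset variable as 0.
[ "${interrupt_mode:-0}" -eq 1 ] || echo "defaulted compare is simply false"
(( interrupt_mode == 1 )) || echo "(( )) treats empty as 0"
```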
00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:36.248 11:24:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:22:36.248 11:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:38.778 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.778 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:22:38.778 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:38.778 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:38.778 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:38.778 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:38.778 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:38.778 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
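gather_supported_nvmf_pci_devs sorts NICs into the `e810`/`x722`/`mlx` buckets declared here by vendor:device ID via the `pci_bus_cache` lookups (this run matches `0x8086:0x159b`, an Intel E810 bound to the `ice` driver). A sketch of that classification as a plain lookup function, with the ID list taken from the trace:

```shell
# Map vendor:device IDs to the NIC families nvmf/common.sh buckets
# devices into (e810/x722 are Intel 0x8086; mlx is Mellanox 0x15b3).
classify_nic() {
  case "$1" in
    0x8086:0x1592 | 0x8086:0x159b) echo e810 ;;
    0x8086:0x37d2)                 echo x722 ;;
    0x15b3:0xa2dc | 0x15b3:0x1021 | 0x15b3:0xa2d6 | \
    0x15b3:0x101d | 0x15b3:0x101b | 0x15b3:0x1017 | \
    0x15b3:0x1019 | 0x15b3:0x1015 | 0x15b3:0x1013) echo mlx ;;
    *)                             echo unknown ;;
  esac
}

classify_nic 0x8086:0x159b   # -> e810 (both ports found in this run)
```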
00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:38.779 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:38.779 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:38.779 Found net devices under 0000:82:00.0: cvl_0_0 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.779 11:24:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:38.779 Found net devices under 0000:82:00.1: cvl_0_1 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.779 11:24:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.779 11:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.779 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.779 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:38.779 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.779 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.779 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.779 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:38.779 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:38.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:22:38.779 00:22:38.779 --- 10.0.0.2 ping statistics --- 00:22:38.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.779 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:22:38.779 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:38.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:22:38.779 00:22:38.779 --- 10.0.0.1 ping statistics --- 00:22:38.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.780 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2690245 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2690245 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2690245 ']' 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.780 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:38.780 [2024-11-19 11:24:34.132785] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:22:38.780 [2024-11-19 11:24:34.132869] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.780 [2024-11-19 11:24:34.214418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:38.780 [2024-11-19 11:24:34.267544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.780 [2024-11-19 11:24:34.267607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:38.780 [2024-11-19 11:24:34.267634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.780 [2024-11-19 11:24:34.267645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.780 [2024-11-19 11:24:34.267654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.780 [2024-11-19 11:24:34.269060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.780 [2024-11-19 11:24:34.269066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.038 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.038 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:39.038 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.038 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.038 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:39.038 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.038 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2690245 00:22:39.038 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:39.296 [2024-11-19 11:24:34.657994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.296 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:22:39.555 Malloc0 00:22:39.555 11:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:39.813 11:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:40.071 11:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.329 [2024-11-19 11:24:35.776007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.329 11:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:40.587 [2024-11-19 11:24:36.040707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:40.587 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2690526 00:22:40.587 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:40.587 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:40.587 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2690526 /var/tmp/bdevperf.sock 00:22:40.587 11:24:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2690526 ']' 00:22:40.587 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.587 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.587 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.587 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.587 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:41.155 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.155 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:41.155 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:41.155 11:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:41.721 Nvme0n1 00:22:41.721 11:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:41.979 Nvme0n1 00:22:41.979 11:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:41.979 11:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:44.517 11:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:44.517 11:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:44.517 11:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:44.775 11:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:45.709 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:45.709 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:45.709 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.709 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:45.967 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.967 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:45.967 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.967 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:46.225 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:46.225 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:46.225 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.225 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:46.791 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.791 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:46.791 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.791 11:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:47.049 11:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.049 11:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:47.049 11:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.049 11:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:47.307 11:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.307 11:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:47.307 11:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.307 11:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:47.565 11:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.565 11:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:47.565 11:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:47.824 11:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:48.082 11:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:49.455 11:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:49.456 11:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:49.456 11:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.456 11:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:49.456 11:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:49.456 11:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:49.456 11:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.456 11:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:49.714 11:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.714 11:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:49.714 11:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.714 11:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:49.972 11:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.972 11:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:49.972 11:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.972 11:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:50.540 11:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.540 11:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:50.540 11:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.540 11:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:50.798 11:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.798 11:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:50.798 11:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.798 11:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:51.056 11:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.056 11:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:51.056 11:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:51.314 11:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:51.571 11:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:52.945 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:52.945 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:52.945 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.945 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:52.945 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.945 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:52.945 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.945 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:53.203 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:53.203 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:53.203 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.203 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:53.461 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.461 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:53.461 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.461 11:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:54.026 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.026 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:54.026 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.026 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:54.284 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.284 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:54.284 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.284 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:54.543 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.543 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:54.543 11:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:54.801 11:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:55.059 11:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:55.995 11:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:55.995 11:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:55.995 11:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.995 11:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:56.562 11:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.562 11:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:56.562 11:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.562 11:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:56.820 11:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:56.820 11:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:56.820 11:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.820 11:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:57.104 11:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.104 11:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:57.104 11:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.104 11:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:57.369 11:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.369 11:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:57.369 11:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.369 11:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:57.638 11:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.638 11:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:57.638 11:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.638 11:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:57.896 11:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:57.896 11:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:57.896 11:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:58.154 11:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:58.719 11:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:59.653 11:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:59.653 11:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:59.653 11:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.653 11:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:59.911 11:24:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:59.911 11:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:59.911 11:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.911 11:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:00.169 11:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:00.169 11:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:00.169 11:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.169 11:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:00.427 11:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.427 11:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:00.427 11:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.427 11:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:00.685 
11:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.685 11:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:00.685 11:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.685 11:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:00.943 11:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:00.943 11:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:00.943 11:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.943 11:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:01.201 11:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:01.201 11:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:01.201 11:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:01.459 11:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:01.717 11:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:02.651 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:02.651 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:02.651 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.651 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:03.219 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:03.219 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:03.219 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.219 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:03.219 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.219 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:03.477 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.477 11:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:03.735 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.735 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:03.735 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.735 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:03.992 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.992 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:03.992 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.992 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:04.250 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.250 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:04.250 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.250 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:04.509 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.509 11:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:05.076 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:05.076 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:05.334 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:05.593 11:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:06.526 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:06.526 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:06.526 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:06.526 11:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:06.784 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.784 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:06.784 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.784 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.351 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.351 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.351 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.351 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.608 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.609 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.609 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:07.609 11:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:07.867 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.867 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:07.867 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.867 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:08.125 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.125 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:08.125 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.125 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.384 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.384 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:08.384 11:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:08.642 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:09.208 11:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:10.142 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:10.142 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:10.142 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.142 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:10.400 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.400 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:10.400 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.400 11:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:10.659 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.659 11:25:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:10.659 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.659 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:10.917 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.917 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:10.917 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.917 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:11.176 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.176 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:11.176 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.176 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:11.743 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.743 
11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:11.743 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.743 11:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:12.001 11:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.001 11:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:12.001 11:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:12.261 11:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:12.518 11:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:13.454 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:13.454 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:13.454 11:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.454 11:25:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:14.019 11:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.019 11:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:14.019 11:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.019 11:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:14.277 11:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.277 11:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:14.277 11:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.277 11:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:14.535 11:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.535 11:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:14.535 11:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.535 11:25:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.793 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.793 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:14.793 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.793 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:15.051 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.051 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:15.051 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.051 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:15.310 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.310 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:15.310 11:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:15.878 11:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:16.136 11:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:17.070 11:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:17.070 11:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:17.070 11:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.070 11:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:17.329 11:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.329 11:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:17.329 11:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.329 11:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:17.587 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:17.587 11:25:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:17.587 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.587 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:18.154 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.154 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:18.154 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.154 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:18.414 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.414 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:18.414 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.414 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:18.674 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.674 
11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:18.674 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.674 11:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:18.932 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:18.932 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2690526 00:23:18.932 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2690526 ']' 00:23:18.932 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2690526 00:23:18.932 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:18.932 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.932 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2690526 00:23:18.932 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:18.932 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:18.932 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2690526' 00:23:18.932 killing process with pid 2690526 00:23:18.932 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2690526 00:23:18.932 
11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2690526 00:23:18.932 { 00:23:18.932 "results": [ 00:23:18.932 { 00:23:18.932 "job": "Nvme0n1", 00:23:18.932 "core_mask": "0x4", 00:23:18.932 "workload": "verify", 00:23:18.932 "status": "terminated", 00:23:18.932 "verify_range": { 00:23:18.932 "start": 0, 00:23:18.932 "length": 16384 00:23:18.932 }, 00:23:18.932 "queue_depth": 128, 00:23:18.932 "io_size": 4096, 00:23:18.932 "runtime": 36.701349, 00:23:18.932 "iops": 8432.769051622598, 00:23:18.932 "mibps": 32.94050410790077, 00:23:18.932 "io_failed": 0, 00:23:18.932 "io_timeout": 0, 00:23:18.932 "avg_latency_us": 15154.439696048676, 00:23:18.932 "min_latency_us": 183.56148148148148, 00:23:18.932 "max_latency_us": 4076242.1096296296 00:23:18.932 } 00:23:18.932 ], 00:23:18.932 "core_count": 1 00:23:18.932 } 00:23:19.214 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2690526 00:23:19.214 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:19.214 [2024-11-19 11:24:36.109219] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:23:19.214 [2024-11-19 11:24:36.109307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690526 ] 00:23:19.214 [2024-11-19 11:24:36.189094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.214 [2024-11-19 11:24:36.246561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.214 Running I/O for 90 seconds... 
00:23:19.214 8951.00 IOPS, 34.96 MiB/s [2024-11-19T10:25:14.711Z] 8933.00 IOPS, 34.89 MiB/s [2024-11-19T10:25:14.711Z] 8889.00 IOPS, 34.72 MiB/s [2024-11-19T10:25:14.711Z] 8922.25 IOPS, 34.85 MiB/s [2024-11-19T10:25:14.711Z] 8923.20 IOPS, 34.86 MiB/s [2024-11-19T10:25:14.711Z] 8920.17 IOPS, 34.84 MiB/s [2024-11-19T10:25:14.711Z] 8932.71 IOPS, 34.89 MiB/s [2024-11-19T10:25:14.711Z] 8948.25 IOPS, 34.95 MiB/s [2024-11-19T10:25:14.711Z] 8936.56 IOPS, 34.91 MiB/s [2024-11-19T10:25:14.711Z] 8947.10 IOPS, 34.95 MiB/s [2024-11-19T10:25:14.711Z] 8960.91 IOPS, 35.00 MiB/s [2024-11-19T10:25:14.711Z] 8964.25 IOPS, 35.02 MiB/s [2024-11-19T10:25:14.711Z] 8940.62 IOPS, 34.92 MiB/s [2024-11-19T10:25:14.711Z] 8956.00 IOPS, 34.98 MiB/s [2024-11-19T10:25:14.711Z] 8944.73 IOPS, 34.94 MiB/s [2024-11-19T10:25:14.711Z] 8938.62 IOPS, 34.92 MiB/s [2024-11-19T10:25:14.711Z] [2024-11-19 11:24:53.626503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.214 [2024-11-19 11:24:53.626574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.626609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.626628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.626667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.626689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.626711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.626752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.626776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.626793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.626827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.626843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.626865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.626882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.626905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.626922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.626944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.626960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.626995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:19.214 [2024-11-19 11:24:53.627706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.214 [2024-11-19 11:24:53.627723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.628494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.215 [2024-11-19 11:24:53.628540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.215 [2024-11-19 11:24:53.628581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.215 [2024-11-19 11:24:53.628620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.215 [2024-11-19 11:24:53.628659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.215 [2024-11-19 11:24:53.628715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.215 [2024-11-19 11:24:53.628768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.628808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.628846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.628884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.628922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.628959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.628980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.628997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.215 [2024-11-19 11:24:53.629945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:19.215 [2024-11-19 11:24:53.629967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.629983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.630583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.630601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.216 [2024-11-19 11:24:53.631207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.631968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.631990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.632006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:19.216 [2024-11-19 11:24:53.632027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.216 [2024-11-19 11:24:53.632043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.632977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.632994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.217 [2024-11-19 11:24:53.633221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:19.217 [2024-11-19 11:24:53.633543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.217 [2024-11-19 11:24:53.633560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.633582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.633598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.633620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.633638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.633681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.633698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.633720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.633736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.633758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.633777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.633799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.633816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.633838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.633854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.633881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.633898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.633920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.633937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.633959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.633976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.633998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.634014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.634037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.634053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.634075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.634092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.634114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.634130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.634152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.634169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.635033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.635089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.635129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.635179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.218 [2024-11-19 11:24:53.635222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.218 [2024-11-19 11:24:53.635261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.218 [2024-11-19 11:24:53.635299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.218 [2024-11-19 11:24:53.635337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.218 [2024-11-19 11:24:53.635406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.218 [2024-11-19 11:24:53.635446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.635485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.635525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.635564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.635603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.635641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:19.218 [2024-11-19 11:24:53.635664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.218 [2024-11-19 11:24:53.635694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.635718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.635738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.635761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.635777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.635798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.635815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.635836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.635852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.635873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.635889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.635910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.635926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.635947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.635963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.635984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.636977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.636993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.637014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.637029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.637051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.637067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.637089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.637104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:19.219 [2024-11-19 11:24:53.637125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.219 [2024-11-19 11:24:53.637142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.637168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.637185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.637868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.637891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.637918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.220 [2024-11-19 11:24:53.637935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.637957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.637973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.637994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.638964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.638980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.639001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.639017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.639039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.639055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.639077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.639092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.639114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.639129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.639150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.639167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.639188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.639204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.639225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.220 [2024-11-19 11:24:53.639241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:19.220 [2024-11-19 11:24:53.639265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.221 [2024-11-19 11:24:53.639936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.639974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.639996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.221 [2024-11-19 11:24:53.640799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:19.221 [2024-11-19 11:24:53.640822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.222 [2024-11-19 11:24:53.640843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:19.222 [2024-11-19 11:24:53.641660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.222 [2024-11-19 11:24:53.641697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:19.222 [2024-11-19 11:24:53.641724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.222 [2024-11-19 11:24:53.641742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:19.222 [2024-11-19 11:24:53.641764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.222 [2024-11-19 11:24:53.641780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:19.222 [2024-11-19 11:24:53.641802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.222 [2024-11-19 11:24:53.641818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:19.222 [2024-11-19 11:24:53.641840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.222 [2024-11-19 11:24:53.641856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:19.222 [2024-11-19 11:24:53.641878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.222 [2024-11-19 11:24:53.641894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:19.222 [2024-11-19 11:24:53.641915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.222 [2024-11-19 11:24:53.641932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:19.222 [2024-11-19 11:24:53.641954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.222 [2024-11-19 11:24:53.641970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:19.222 [2024-11-19 11:24:53.641992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.222 [2024-11-19 11:24:53.642007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:19.222 [2024-11-19 11:24:53.642029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.222 [2024-11-19 11:24:53.642045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:19.222 [2024-11-19 11:24:53.642067] nvme_qpair.c: 
00:23:19.222-00:23:19.225 [2024-11-19 11:24:53.642083 - 11:24:53.647379] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ~130 near-identical command/completion pairs elided: READ and WRITE commands (sqid:1, nsid:1, lba:99144-100160, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000 or SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.647404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.225 [2024-11-19 11:24:53.647421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.225 [2024-11-19 11:24:53.648303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.225 [2024-11-19 11:24:53.648370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.225 [2024-11-19 11:24:53.648416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.225 [2024-11-19 11:24:53.648454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.225 [2024-11-19 11:24:53.648493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.225 [2024-11-19 11:24:53.648532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.225 [2024-11-19 11:24:53.648571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.225 [2024-11-19 11:24:53.648616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.225 [2024-11-19 11:24:53.648655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.225 [2024-11-19 11:24:53.648694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.225 [2024-11-19 11:24:53.648733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.225 [2024-11-19 11:24:53.648772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.225 [2024-11-19 11:24:53.648810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.225 [2024-11-19 11:24:53.648850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.225 [2024-11-19 11:24:53.648872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.648888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.648910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.648926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.648949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.648965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.648987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.649964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.649979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.650001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.650017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.650038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.650053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.650079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.650096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.650118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.650133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.650154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.650169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.650190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.650206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.650227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.650243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.650264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.650280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:19.226 [2024-11-19 11:24:53.650301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.226 [2024-11-19 11:24:53.650316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.650337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.650376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.650402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.650425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.650449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.650467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.227 [2024-11-19 11:24:53.651206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.227 [2024-11-19 11:24:53.651692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:19.227 [2024-11-19 11:24:53.651714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.651730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.651752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.651768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.651790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.651805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.651827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.651844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.651866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.651882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.651903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.651919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.651941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.651958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.651980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.651996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.652018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.652034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.652056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.652072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.652094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.652117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.652141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.652157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.652179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.652195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.652217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.652233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.652255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.652272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.652293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.652309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.652331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.652348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:23:19.227 [2024-11-19 11:24:53.652377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.227 [2024-11-19 11:24:53.652396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.652971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.652987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.228 [2024-11-19 11:24:53.653146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:19.228 [2024-11-19 11:24:53.653701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.228 [2024-11-19 11:24:53.653732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.653755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.653772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.653792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.653808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.653829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.653845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.653866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.653881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.653902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.653918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.653939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.653955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.654767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.654791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.654816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.654833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.654855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.654876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.654898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.654914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.654935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.654951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.654972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.654988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.229 [2024-11-19 11:24:53.655061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.229 [2024-11-19 11:24:53.655098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.229 [2024-11-19 11:24:53.655135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.229 [2024-11-19 11:24:53.655171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.229 [2024-11-19 11:24:53.655208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.229 [2024-11-19 11:24:53.655244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:23:19.229 [2024-11-19 11:24:53.655983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.229 [2024-11-19 11:24:53.655999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.656911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.656928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.657518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.657541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.657568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.657585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.657609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.657626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.657662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.230 [2024-11-19 11:24:53.657678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:23:19.230 [2024-11-19 11:24:53.657699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.230 [2024-11-19 11:24:53.657715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:19.230 [2024-11-19 11:24:53.657736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.230 [2024-11-19 11:24:53.657751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:19.230 [2024-11-19 11:24:53.657772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.230 [2024-11-19 11:24:53.657787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:19.230 [2024-11-19 11:24:53.657808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.230 [2024-11-19 11:24:53.657823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:19.230 [2024-11-19 11:24:53.657844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.230 [2024-11-19 11:24:53.657860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:19.230 [2024-11-19 11:24:53.657880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.230 [2024-11-19 11:24:53.657895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:19.230 [2024-11-19 11:24:53.657921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.230 [2024-11-19 11:24:53.657937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:19.230 [2024-11-19 11:24:53.657958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.230 [2024-11-19 11:24:53.657973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:19.230 [2024-11-19 11:24:53.657994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.230 [2024-11-19 11:24:53.658009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:19.230 [2024-11-19 11:24:53.658030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.658969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.658984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.231 [2024-11-19 11:24:53.659497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:19.231 [2024-11-19 11:24:53.659520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.659536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.659557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.659573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.659596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.659612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.659634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.232 [2024-11-19 11:24:53.659664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.659693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.659724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.659747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.659762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.659783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.659798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.659819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.659835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.659857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.659872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.659897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.659913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.659934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.659950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.659970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.659985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.660006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.660021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.660042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.660058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.660078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.660093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.660114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.660130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.660150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.660165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.660185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.660201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.660221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.660237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.660257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.660272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.660294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.660311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.660335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.660375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.660401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.660418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.661252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.661274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.661299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.661316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.661353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.661379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.661405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.661422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.661444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.661461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.661483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.661499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.661521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.661538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.661560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.232 [2024-11-19 11:24:53.661577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.661599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.232 [2024-11-19 11:24:53.661615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:19.232 [2024-11-19 11:24:53.661652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.232 [2024-11-19 11:24:53.661668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.661691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.233 [2024-11-19 11:24:53.661730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.661753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.233 [2024-11-19 11:24:53.661769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.661790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.233 [2024-11-19 11:24:53.661806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.661827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.233 [2024-11-19 11:24:53.661842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.661862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.661878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.661898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.661914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.661934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.661949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.661969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.661985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.662973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.662988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.663009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.663024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.663045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.663061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.663081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.663097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:19.233 [2024-11-19 11:24:53.663118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.233 [2024-11-19 11:24:53.663133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.663158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.663175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.663196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.663212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.663233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.663249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.663270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.663285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.663306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.663322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.663357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.663385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.663429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.663447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.234 [2024-11-19 11:24:53.664275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.664966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.664987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.665002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.665023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.665038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.665059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.665075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.665095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.665111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.665131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.665147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.665168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.234 [2024-11-19 11:24:53.665184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:19.234 [2024-11-19 11:24:53.665204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.665975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.665995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.235 [2024-11-19 11:24:53.666194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:19.235 [2024-11-19 11:24:53.666628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.235 [2024-11-19 11:24:53.666644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.666682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.666709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.666748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.666764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.666786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.666802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.666823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.666839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.666860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.666876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.666896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.666912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.666933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.666948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.667773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.667795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.667820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.667837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.667859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.667875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.667896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.667911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.667932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.667948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.667979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.667995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.236 [2024-11-19 11:24:53.668147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.236 [2024-11-19 11:24:53.668185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.236 [2024-11-19 11:24:53.668237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.236 [2024-11-19 11:24:53.668275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.236 [2024-11-19 11:24:53.668313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.236 [2024-11-19 11:24:53.668375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:19.236 [2024-11-19 11:24:53.668974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.236 [2024-11-19 11:24:53.668989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.669935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.669951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.670548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.670572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.670605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.670624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.670647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.670680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.670703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.670733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.670756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.670772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.670793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.670808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.670829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.237 [2024-11-19 11:24:53.670845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.670866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.237 [2024-11-19 11:24:53.670882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:19.237 [2024-11-19 11:24:53.670903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.238 [2024-11-19 11:24:53.670918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:19.238 [2024-11-19 11:24:53.670939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.238 [2024-11-19 11:24:53.670954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
[... identical NOTICE pairs repeated for every outstanding I/O on qid:1 between 11:24:53.670954 and 11:24:53.682000: WRITE commands (lba:99208-100160, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba:99144-99192, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:23:19.241 [2024-11-19 11:24:53.682000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:19.241 [2024-11-19 11:24:53.682024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.241 [2024-11-19 11:24:53.682040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:19.241 [2024-11-19 11:24:53.682065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.241 [2024-11-19 11:24:53.682081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:19.241 [2024-11-19 11:24:53.682221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.241 [2024-11-19 11:24:53.682241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:19.241 8439.12 IOPS, 32.97 MiB/s [2024-11-19T10:25:14.738Z] 7970.28 IOPS, 31.13 MiB/s [2024-11-19T10:25:14.738Z] 7550.79 IOPS, 29.50 MiB/s [2024-11-19T10:25:14.738Z] 7173.25 IOPS, 28.02 MiB/s [2024-11-19T10:25:14.738Z] 7225.38 IOPS, 28.22 MiB/s [2024-11-19T10:25:14.738Z] 7304.36 IOPS, 28.53 MiB/s [2024-11-19T10:25:14.738Z] 7371.78 IOPS, 28.80 MiB/s [2024-11-19T10:25:14.738Z] 7523.67 IOPS, 29.39 MiB/s [2024-11-19T10:25:14.738Z] 7680.88 IOPS, 30.00 MiB/s [2024-11-19T10:25:14.738Z] 7828.35 IOPS, 30.58 MiB/s [2024-11-19T10:25:14.738Z] 7926.85 IOPS, 30.96 MiB/s [2024-11-19T10:25:14.738Z] 7969.71 IOPS, 31.13 MiB/s [2024-11-19T10:25:14.738Z] 8001.76 IOPS, 31.26 MiB/s [2024-11-19T10:25:14.738Z] 8022.67 IOPS, 31.34 MiB/s [2024-11-19T10:25:14.738Z] 8116.16 IOPS, 31.70 MiB/s 
[2024-11-19T10:25:14.738Z] 8221.75 IOPS, 32.12 MiB/s [2024-11-19T10:25:14.738Z] 8315.42 IOPS, 32.48 MiB/s [2024-11-19T10:25:14.738Z] [2024-11-19 11:25:11.408147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.241 [2024-11-19 11:25:11.408222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:19.242 8392.71 IOPS, 32.78 MiB/s [2024-11-19T10:25:14.739Z] 8409.37 IOPS, 32.85 MiB/s [2024-11-19T10:25:14.739Z] 8426.06 IOPS, 32.91 MiB/s [2024-11-19T10:25:14.739Z] Received shutdown signal, test time was about 36.702108 seconds 00:23:19.242 00:23:19.242 Latency(us) 00:23:19.242 [2024-11-19T10:25:14.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.242 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:19.242 Verification LBA range: start 0x0 length 0x4000 00:23:19.242 Nvme0n1 : 36.70 8432.77 32.94 0.00 0.00 15154.44 183.56 4076242.11 00:23:19.242 [2024-11-19T10:25:14.739Z] =================================================================================================================== 00:23:19.242 [2024-11-19T10:25:14.739Z] Total : 8432.77 32.94 0.00 0.00 15154.44 183.56 4076242.11 00:23:19.242 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:19.530 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:19.530 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:19.530 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # 
nvmftestfini 00:23:19.530 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:19.530 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:19.531 rmmod nvme_tcp 00:23:19.531 rmmod nvme_fabrics 00:23:19.531 rmmod nvme_keyring 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2690245 ']' 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2690245 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2690245 ']' 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2690245 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2690245 00:23:19.531 11:25:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2690245' 00:23:19.531 killing process with pid 2690245 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2690245 00:23:19.531 11:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2690245 00:23:19.789 11:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:19.789 11:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:19.789 11:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:19.789 11:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:19.789 11:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:19.789 11:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:19.789 11:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:19.789 11:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:19.789 11:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:19.789 11:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.789 11:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.789 11:25:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.696 11:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:21.696 00:23:21.696 real 0m46.015s 00:23:21.696 user 2m19.380s 00:23:21.696 sys 0m12.682s 00:23:21.696 11:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.696 11:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:21.696 ************************************ 00:23:21.696 END TEST nvmf_host_multipath_status 00:23:21.696 ************************************ 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.955 ************************************ 00:23:21.955 START TEST nvmf_discovery_remove_ifc 00:23:21.955 ************************************ 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:21.955 * Looking for test storage... 
00:23:21.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:23:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.955 --rc genhtml_branch_coverage=1 00:23:21.955 --rc genhtml_function_coverage=1 00:23:21.955 --rc genhtml_legend=1 00:23:21.955 --rc geninfo_all_blocks=1 00:23:21.955 --rc geninfo_unexecuted_blocks=1 00:23:21.955 00:23:21.955 ' 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.955 --rc genhtml_branch_coverage=1 00:23:21.955 --rc genhtml_function_coverage=1 00:23:21.955 --rc genhtml_legend=1 00:23:21.955 --rc geninfo_all_blocks=1 00:23:21.955 --rc geninfo_unexecuted_blocks=1 00:23:21.955 00:23:21.955 ' 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.955 --rc genhtml_branch_coverage=1 00:23:21.955 --rc genhtml_function_coverage=1 00:23:21.955 --rc genhtml_legend=1 00:23:21.955 --rc geninfo_all_blocks=1 00:23:21.955 --rc geninfo_unexecuted_blocks=1 00:23:21.955 00:23:21.955 ' 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.955 --rc genhtml_branch_coverage=1 00:23:21.955 --rc genhtml_function_coverage=1 00:23:21.955 --rc genhtml_legend=1 00:23:21.955 --rc geninfo_all_blocks=1 00:23:21.955 --rc geninfo_unexecuted_blocks=1 00:23:21.955 00:23:21.955 ' 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.955 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:21.956 
11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:21.956 11:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:23:24.494 11:25:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.494 11:25:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:23:24.494 Found 0000:82:00.0 (0x8086 - 0x159b) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.494 11:25:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:23:24.494 Found 0000:82:00.1 (0x8086 - 0x159b) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:23:24.494 Found net devices under 0000:82:00.0: cvl_0_0 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.494 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:23:24.495 Found net devices under 0000:82:00.1: cvl_0_1 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.495 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.754 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:24.754 11:25:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:24.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:23:24.754 00:23:24.754 --- 10.0.0.2 ping statistics --- 00:23:24.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.754 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:23:24.754 00:23:24.754 --- 10.0.0.1 ping statistics --- 00:23:24.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.754 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2697564 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2697564 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2697564 ']' 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.754 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:24.754 [2024-11-19 11:25:20.122800] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:23:24.754 [2024-11-19 11:25:20.122873] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.754 [2024-11-19 11:25:20.209219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.014 [2024-11-19 11:25:20.268923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.014 [2024-11-19 11:25:20.268987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:25.014 [2024-11-19 11:25:20.269000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.014 [2024-11-19 11:25:20.269026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.014 [2024-11-19 11:25:20.269035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.014 [2024-11-19 11:25:20.269709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:25.014 [2024-11-19 11:25:20.414968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.014 [2024-11-19 11:25:20.423142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:25.014 null0 00:23:25.014 [2024-11-19 11:25:20.455127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2697589 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2697589 /tmp/host.sock 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2697589 ']' 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:25.014 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.014 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:25.275 [2024-11-19 11:25:20.520664] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:23:25.275 [2024-11-19 11:25:20.520745] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697589 ] 00:23:25.275 [2024-11-19 11:25:20.593915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.275 [2024-11-19 11:25:20.650803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.275 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.275 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:25.275 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.275 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:25.275 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.275 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:25.275 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.275 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:25.275 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.275 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:25.534 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.534 11:25:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:25.534 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.534 11:25:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.468 [2024-11-19 11:25:21.882312] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:26.468 [2024-11-19 11:25:21.882340] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:26.468 [2024-11-19 11:25:21.882388] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:26.726 [2024-11-19 11:25:21.969679] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:26.726 [2024-11-19 11:25:22.112658] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:26.726 [2024-11-19 11:25:22.113698] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2126be0:1 started. 
00:23:26.726 [2024-11-19 11:25:22.115410] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:26.726 [2024-11-19 11:25:22.115470] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:26.726 [2024-11-19 11:25:22.115505] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:26.726 [2024-11-19 11:25:22.115528] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:26.726 [2024-11-19 11:25:22.115561] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.726 [2024-11-19 11:25:22.161995] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2126be0 was disconnected and freed. delete nvme_qpair. 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:26.726 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:26.984 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.984 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:26.984 11:25:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:27.918 11:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:27.918 11:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.918 11:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:27.918 11:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.918 11:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:27.918 11:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.918 11:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:27.918 11:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.918 11:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:27.918 11:25:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:28.852 11:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.852 11:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.852 11:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.852 11:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.852 11:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.852 11:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.852 11:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:23:28.852 11:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.852 11:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:28.852 11:25:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:30.226 11:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.226 11:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.226 11:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.226 11:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.226 11:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.226 11:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.226 11:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.226 11:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.226 11:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:30.226 11:25:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:31.160 11:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.160 11:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.160 11:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.160 11:25:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.160 11:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.160 11:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:31.160 11:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:31.160 11:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.160 11:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:31.160 11:25:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:32.095 11:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:32.095 11:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.095 11:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:32.095 11:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.095 11:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:32.095 11:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:32.095 11:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:32.095 11:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.095 11:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:32.095 11:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:23:32.095 [2024-11-19 11:25:27.556922] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:32.095 [2024-11-19 11:25:27.557011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.095 [2024-11-19 11:25:27.557034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.095 [2024-11-19 11:25:27.557053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.095 [2024-11-19 11:25:27.557066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.095 [2024-11-19 11:25:27.557079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.095 [2024-11-19 11:25:27.557091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.095 [2024-11-19 11:25:27.557104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.095 [2024-11-19 11:25:27.557117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.095 [2024-11-19 11:25:27.557130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.095 [2024-11-19 11:25:27.557142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.095 [2024-11-19 11:25:27.557155] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2103400 is same with the state(6) to be set 00:23:32.095 [2024-11-19 11:25:27.566942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2103400 (9): Bad file descriptor 00:23:32.095 [2024-11-19 11:25:27.576979] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:32.095 [2024-11-19 11:25:27.577000] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:32.095 [2024-11-19 11:25:27.577010] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:32.095 [2024-11-19 11:25:27.577018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:32.095 [2024-11-19 11:25:27.577070] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:33.029 11:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:33.029 11:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.029 11:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:33.029 11:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.029 11:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:33.029 11:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:33.029 11:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.288 [2024-11-19 11:25:28.637398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:33.288 [2024-11-19 11:25:28.637474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2103400 with addr=10.0.0.2, port=4420 00:23:33.288 [2024-11-19 11:25:28.637502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2103400 is same with the state(6) to be set 00:23:33.288 [2024-11-19 11:25:28.637552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2103400 (9): Bad file descriptor 00:23:33.288 [2024-11-19 11:25:28.637982] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:23:33.288 [2024-11-19 11:25:28.638035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:33.288 [2024-11-19 11:25:28.638052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:33.288 [2024-11-19 11:25:28.638069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:33.288 [2024-11-19 11:25:28.638082] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:33.288 [2024-11-19 11:25:28.638091] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:33.288 [2024-11-19 11:25:28.638098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:33.288 [2024-11-19 11:25:28.638111] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:33.288 [2024-11-19 11:25:28.638120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:33.288 11:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.288 11:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:33.288 11:25:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:34.221 [2024-11-19 11:25:29.640613] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:34.221 [2024-11-19 11:25:29.640661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:34.221 [2024-11-19 11:25:29.640704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:34.221 [2024-11-19 11:25:29.640719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:34.221 [2024-11-19 11:25:29.640735] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:34.221 [2024-11-19 11:25:29.640749] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:34.221 [2024-11-19 11:25:29.640758] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:34.221 [2024-11-19 11:25:29.640766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:34.221 [2024-11-19 11:25:29.640807] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:34.221 [2024-11-19 11:25:29.640862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.221 [2024-11-19 11:25:29.640884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.221 [2024-11-19 11:25:29.640914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.221 [2024-11-19 11:25:29.640926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.221 [2024-11-19 11:25:29.640940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:34.221 [2024-11-19 11:25:29.640952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.221 [2024-11-19 11:25:29.640965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.221 [2024-11-19 11:25:29.640978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.221 [2024-11-19 11:25:29.641007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.221 [2024-11-19 11:25:29.641020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.221 [2024-11-19 11:25:29.641033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:34.221 [2024-11-19 11:25:29.641085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f2b40 (9): Bad file descriptor 00:23:34.221 [2024-11-19 11:25:29.642077] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:34.221 [2024-11-19 11:25:29.642100] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:34.221 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:34.221 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.221 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:34.221 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:34.221 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:34.221 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.221 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:34.221 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.221 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:34.221 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.221 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.479 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:34.479 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:34.479 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.479 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:34.479 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.479 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.479 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:34.479 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:34.479 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:34.479 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:34.479 11:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:35.412 11:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:35.412 11:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.412 11:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:35.412 11:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.412 11:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:35.412 11:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.412 11:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:35.412 11:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.412 11:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:35.412 11:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:36.345 [2024-11-19 11:25:31.699071] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:36.345 [2024-11-19 11:25:31.699104] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:36.345 [2024-11-19 11:25:31.699126] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:36.345 [2024-11-19 11:25:31.827532] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:36.345 11:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.345 11:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.345 11:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.345 11:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.345 11:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.346 11:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.346 11:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.603 11:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.603 11:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:36.603 11:25:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:36.603 [2024-11-19 11:25:31.887245] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:23:36.603 [2024-11-19 11:25:31.888074] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2104840:1 started. 
00:23:36.603 [2024-11-19 11:25:31.889401] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:36.603 [2024-11-19 11:25:31.889451] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:36.603 [2024-11-19 11:25:31.889481] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:36.603 [2024-11-19 11:25:31.889509] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:36.603 [2024-11-19 11:25:31.889521] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:36.603 [2024-11-19 11:25:31.896807] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2104840 was disconnected and freed. delete nvme_qpair. 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:37.537 11:25:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2697589 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2697589 ']' 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2697589 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2697589 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2697589' 00:23:37.537 killing process with pid 2697589 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2697589 00:23:37.537 11:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2697589 00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:37.795 
11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:37.795 rmmod nvme_tcp
00:23:37.795 rmmod nvme_fabrics
00:23:37.795 rmmod nvme_keyring
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2697564 ']'
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2697564
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2697564 ']'
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2697564
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2697564
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2697564'
00:23:37.795 killing process with pid 2697564
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2697564
00:23:37.795 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2697564
00:23:38.053 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:38.053 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:38.053 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:38.053 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr
00:23:38.053 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save
00:23:38.053 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:38.053 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore
00:23:38.053 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:38.053 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:38.053 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:38.053 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:38.053 11:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:40.590
00:23:40.590 real 0m18.315s
00:23:40.590 user 0m26.114s
00:23:40.590 sys 0m3.299s
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:40.590 ************************************
00:23:40.590 END TEST nvmf_discovery_remove_ifc
00:23:40.590 ************************************
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.590 ************************************
00:23:40.590 START TEST nvmf_identify_kernel_target
00:23:40.590 ************************************
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:23:40.590 * Looking for test storage...
00:23:40.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:23:40.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:40.590 --rc genhtml_branch_coverage=1
00:23:40.590 --rc genhtml_function_coverage=1
00:23:40.590 --rc genhtml_legend=1
00:23:40.590 --rc geninfo_all_blocks=1
00:23:40.590 --rc geninfo_unexecuted_blocks=1
00:23:40.590
00:23:40.590 '
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:23:40.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:40.590 --rc genhtml_branch_coverage=1
00:23:40.590 --rc genhtml_function_coverage=1
00:23:40.590 --rc genhtml_legend=1
00:23:40.590 --rc geninfo_all_blocks=1
00:23:40.590 --rc geninfo_unexecuted_blocks=1
00:23:40.590
00:23:40.590 '
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:23:40.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:40.590 --rc genhtml_branch_coverage=1
00:23:40.590 --rc genhtml_function_coverage=1
00:23:40.590 --rc genhtml_legend=1
00:23:40.590 --rc geninfo_all_blocks=1
00:23:40.590 --rc geninfo_unexecuted_blocks=1
00:23:40.590
00:23:40.590 '
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:23:40.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:40.590 --rc genhtml_branch_coverage=1
00:23:40.590 --rc genhtml_function_coverage=1
00:23:40.590 --rc genhtml_legend=1
00:23:40.590 --rc geninfo_all_blocks=1
00:23:40.590 --rc geninfo_unexecuted_blocks=1
00:23:40.590
00:23:40.590 '
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd
00:23:40.590 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:23:40.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:23:40.591 11:25:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:23:43.124 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:43.124 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=()
00:23:43.124 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:43.124 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=()
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=()
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=()
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=()
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)'
00:23:43.125 Found 0000:82:00.0 (0x8086 - 0x159b)
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)'
00:23:43.125 Found 0000:82:00.1 (0x8086 - 0x159b)
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0'
00:23:43.125 Found net devices under 0000:82:00.0: cvl_0_0
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1'
00:23:43.125 Found net devices under 0000:82:00.1: cvl_0_1
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:43.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:43.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms
00:23:43.125
00:23:43.125 --- 10.0.0.2 ping statistics ---
00:23:43.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:43.125 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms
00:23:43.125 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:43.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:43.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:23:43.126
00:23:43.126 --- 10.0.0.1 ping statistics ---
00:23:43.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:43.126 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:23:43.126 11:25:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:23:44.501 Waiting for block devices as requested
00:23:44.760 0000:81:00.0 (8086 0a54): vfio-pci -> nvme
00:23:44.760 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:23:44.760 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:23:45.018 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:23:45.018 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:23:45.018 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:23:45.277 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:23:45.277 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:23:45.277 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:23:45.277 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:23:45.536 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:23:45.536 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:23:45.536 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:23:45.536 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:23:45.795 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:23:45.795 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:45.795 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:46.054 No valid GPT data, bailing 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:46.054 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:23:46.054 00:23:46.054 Discovery Log Number of Records 2, Generation counter 2 00:23:46.054 =====Discovery Log Entry 0====== 00:23:46.054 trtype: tcp 00:23:46.054 adrfam: ipv4 00:23:46.054 subtype: current discovery subsystem 
00:23:46.054 treq: not specified, sq flow control disable supported 00:23:46.055 portid: 1 00:23:46.055 trsvcid: 4420 00:23:46.055 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:46.055 traddr: 10.0.0.1 00:23:46.055 eflags: none 00:23:46.055 sectype: none 00:23:46.055 =====Discovery Log Entry 1====== 00:23:46.055 trtype: tcp 00:23:46.055 adrfam: ipv4 00:23:46.055 subtype: nvme subsystem 00:23:46.055 treq: not specified, sq flow control disable supported 00:23:46.055 portid: 1 00:23:46.055 trsvcid: 4420 00:23:46.055 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:46.055 traddr: 10.0.0.1 00:23:46.055 eflags: none 00:23:46.055 sectype: none 00:23:46.055 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:46.055 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:46.315 ===================================================== 00:23:46.315 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:46.315 ===================================================== 00:23:46.315 Controller Capabilities/Features 00:23:46.315 ================================ 00:23:46.315 Vendor ID: 0000 00:23:46.315 Subsystem Vendor ID: 0000 00:23:46.315 Serial Number: 766c9700415f7810beb2 00:23:46.315 Model Number: Linux 00:23:46.315 Firmware Version: 6.8.9-20 00:23:46.315 Recommended Arb Burst: 0 00:23:46.315 IEEE OUI Identifier: 00 00 00 00:23:46.315 Multi-path I/O 00:23:46.315 May have multiple subsystem ports: No 00:23:46.315 May have multiple controllers: No 00:23:46.315 Associated with SR-IOV VF: No 00:23:46.315 Max Data Transfer Size: Unlimited 00:23:46.315 Max Number of Namespaces: 0 00:23:46.315 Max Number of I/O Queues: 1024 00:23:46.315 NVMe Specification Version (VS): 1.3 00:23:46.315 NVMe Specification Version (Identify): 1.3 00:23:46.315 Maximum Queue Entries: 1024 
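The configfs trace above (nvmf/common.sh@686-705) is the standard kernel NVMe-oF/TCP target bring-up: create the subsystem, namespace, and port directories, write their attributes, then symlink the subsystem under the port. A dry-run sketch follows; the NQN, address, port, and backing device come from the log, but note that xtrace does not echo redirection targets, so the `attr_*`/`device_path` file names here are assumed from the upstream kernel nvmet configfs layout, not taken from the trace:

```shell
#!/usr/bin/env bash
# Sketch of the kernel target setup traced above. Running it for real
# needs root and the nvmet/nvmet-tcp modules; with DRYRUN=1 it only
# prints the commands it would execute.
setup_kernel_target() {
    local nvmet=/sys/kernel/config/nvmet
    local subnqn=nqn.2016-06.io.spdk:testnqn
    local subsys=$nvmet/subsystems/$subnqn
    local ns=$subsys/namespaces/1
    local port=$nvmet/ports/1

    run() { if [[ ${DRYRUN:-0} == 1 ]]; then echo "$*"; else "$@"; fi; }
    put() { if [[ ${DRYRUN:-0} == 1 ]]; then echo "echo $1 > $2"; else echo "$1" > "$2"; fi; }

    run modprobe nvmet nvmet-tcp
    run mkdir -p "$subsys" "$ns" "$port"

    put "SPDK-$subnqn" "$subsys/attr_model"         # model string seen in the identify output
    put 1              "$subsys/attr_allow_any_host"
    put /dev/nvme0n1   "$ns/device_path"            # backing block device from the log
    put 1              "$ns/enable"

    put 10.0.0.1 "$port/addr_traddr"
    put tcp      "$port/addr_trtype"
    put 4420     "$port/addr_trsvcid"
    put ipv4     "$port/addr_adrfam"

    run ln -s "$subsys" "$port/subsystems/"         # expose the subsystem on the port
}
```

Once the symlink lands, the kernel target answers `nvme discover -t tcp -a 10.0.0.1 -s 4420`, which is what produces the two-record discovery log shown above.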
00:23:46.315 Contiguous Queues Required: No 00:23:46.315 Arbitration Mechanisms Supported 00:23:46.315 Weighted Round Robin: Not Supported 00:23:46.315 Vendor Specific: Not Supported 00:23:46.315 Reset Timeout: 7500 ms 00:23:46.315 Doorbell Stride: 4 bytes 00:23:46.315 NVM Subsystem Reset: Not Supported 00:23:46.315 Command Sets Supported 00:23:46.315 NVM Command Set: Supported 00:23:46.315 Boot Partition: Not Supported 00:23:46.315 Memory Page Size Minimum: 4096 bytes 00:23:46.315 Memory Page Size Maximum: 4096 bytes 00:23:46.315 Persistent Memory Region: Not Supported 00:23:46.315 Optional Asynchronous Events Supported 00:23:46.315 Namespace Attribute Notices: Not Supported 00:23:46.315 Firmware Activation Notices: Not Supported 00:23:46.315 ANA Change Notices: Not Supported 00:23:46.315 PLE Aggregate Log Change Notices: Not Supported 00:23:46.315 LBA Status Info Alert Notices: Not Supported 00:23:46.315 EGE Aggregate Log Change Notices: Not Supported 00:23:46.315 Normal NVM Subsystem Shutdown event: Not Supported 00:23:46.315 Zone Descriptor Change Notices: Not Supported 00:23:46.315 Discovery Log Change Notices: Supported 00:23:46.315 Controller Attributes 00:23:46.315 128-bit Host Identifier: Not Supported 00:23:46.315 Non-Operational Permissive Mode: Not Supported 00:23:46.315 NVM Sets: Not Supported 00:23:46.315 Read Recovery Levels: Not Supported 00:23:46.315 Endurance Groups: Not Supported 00:23:46.315 Predictable Latency Mode: Not Supported 00:23:46.315 Traffic Based Keep ALive: Not Supported 00:23:46.315 Namespace Granularity: Not Supported 00:23:46.315 SQ Associations: Not Supported 00:23:46.315 UUID List: Not Supported 00:23:46.315 Multi-Domain Subsystem: Not Supported 00:23:46.315 Fixed Capacity Management: Not Supported 00:23:46.315 Variable Capacity Management: Not Supported 00:23:46.315 Delete Endurance Group: Not Supported 00:23:46.315 Delete NVM Set: Not Supported 00:23:46.315 Extended LBA Formats Supported: Not Supported 00:23:46.315 Flexible 
Data Placement Supported: Not Supported 00:23:46.315 00:23:46.315 Controller Memory Buffer Support 00:23:46.315 ================================ 00:23:46.315 Supported: No 00:23:46.315 00:23:46.315 Persistent Memory Region Support 00:23:46.315 ================================ 00:23:46.315 Supported: No 00:23:46.315 00:23:46.315 Admin Command Set Attributes 00:23:46.315 ============================ 00:23:46.315 Security Send/Receive: Not Supported 00:23:46.315 Format NVM: Not Supported 00:23:46.315 Firmware Activate/Download: Not Supported 00:23:46.315 Namespace Management: Not Supported 00:23:46.315 Device Self-Test: Not Supported 00:23:46.315 Directives: Not Supported 00:23:46.315 NVMe-MI: Not Supported 00:23:46.315 Virtualization Management: Not Supported 00:23:46.315 Doorbell Buffer Config: Not Supported 00:23:46.315 Get LBA Status Capability: Not Supported 00:23:46.315 Command & Feature Lockdown Capability: Not Supported 00:23:46.315 Abort Command Limit: 1 00:23:46.315 Async Event Request Limit: 1 00:23:46.315 Number of Firmware Slots: N/A 00:23:46.315 Firmware Slot 1 Read-Only: N/A 00:23:46.315 Firmware Activation Without Reset: N/A 00:23:46.315 Multiple Update Detection Support: N/A 00:23:46.315 Firmware Update Granularity: No Information Provided 00:23:46.315 Per-Namespace SMART Log: No 00:23:46.315 Asymmetric Namespace Access Log Page: Not Supported 00:23:46.315 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:46.315 Command Effects Log Page: Not Supported 00:23:46.315 Get Log Page Extended Data: Supported 00:23:46.315 Telemetry Log Pages: Not Supported 00:23:46.315 Persistent Event Log Pages: Not Supported 00:23:46.315 Supported Log Pages Log Page: May Support 00:23:46.315 Commands Supported & Effects Log Page: Not Supported 00:23:46.315 Feature Identifiers & Effects Log Page:May Support 00:23:46.315 NVMe-MI Commands & Effects Log Page: May Support 00:23:46.315 Data Area 4 for Telemetry Log: Not Supported 00:23:46.315 Error Log Page Entries 
Supported: 1 00:23:46.315 Keep Alive: Not Supported 00:23:46.315 00:23:46.315 NVM Command Set Attributes 00:23:46.315 ========================== 00:23:46.315 Submission Queue Entry Size 00:23:46.315 Max: 1 00:23:46.315 Min: 1 00:23:46.315 Completion Queue Entry Size 00:23:46.315 Max: 1 00:23:46.315 Min: 1 00:23:46.315 Number of Namespaces: 0 00:23:46.315 Compare Command: Not Supported 00:23:46.315 Write Uncorrectable Command: Not Supported 00:23:46.315 Dataset Management Command: Not Supported 00:23:46.316 Write Zeroes Command: Not Supported 00:23:46.316 Set Features Save Field: Not Supported 00:23:46.316 Reservations: Not Supported 00:23:46.316 Timestamp: Not Supported 00:23:46.316 Copy: Not Supported 00:23:46.316 Volatile Write Cache: Not Present 00:23:46.316 Atomic Write Unit (Normal): 1 00:23:46.316 Atomic Write Unit (PFail): 1 00:23:46.316 Atomic Compare & Write Unit: 1 00:23:46.316 Fused Compare & Write: Not Supported 00:23:46.316 Scatter-Gather List 00:23:46.316 SGL Command Set: Supported 00:23:46.316 SGL Keyed: Not Supported 00:23:46.316 SGL Bit Bucket Descriptor: Not Supported 00:23:46.316 SGL Metadata Pointer: Not Supported 00:23:46.316 Oversized SGL: Not Supported 00:23:46.316 SGL Metadata Address: Not Supported 00:23:46.316 SGL Offset: Supported 00:23:46.316 Transport SGL Data Block: Not Supported 00:23:46.316 Replay Protected Memory Block: Not Supported 00:23:46.316 00:23:46.316 Firmware Slot Information 00:23:46.316 ========================= 00:23:46.316 Active slot: 0 00:23:46.316 00:23:46.316 00:23:46.316 Error Log 00:23:46.316 ========= 00:23:46.316 00:23:46.316 Active Namespaces 00:23:46.316 ================= 00:23:46.316 Discovery Log Page 00:23:46.316 ================== 00:23:46.316 Generation Counter: 2 00:23:46.316 Number of Records: 2 00:23:46.316 Record Format: 0 00:23:46.316 00:23:46.316 Discovery Log Entry 0 00:23:46.316 ---------------------- 00:23:46.316 Transport Type: 3 (TCP) 00:23:46.316 Address Family: 1 (IPv4) 00:23:46.316 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:23:46.316 Entry Flags: 00:23:46.316 Duplicate Returned Information: 0 00:23:46.316 Explicit Persistent Connection Support for Discovery: 0 00:23:46.316 Transport Requirements: 00:23:46.316 Secure Channel: Not Specified 00:23:46.316 Port ID: 1 (0x0001) 00:23:46.316 Controller ID: 65535 (0xffff) 00:23:46.316 Admin Max SQ Size: 32 00:23:46.316 Transport Service Identifier: 4420 00:23:46.316 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:46.316 Transport Address: 10.0.0.1 00:23:46.316 Discovery Log Entry 1 00:23:46.316 ---------------------- 00:23:46.316 Transport Type: 3 (TCP) 00:23:46.316 Address Family: 1 (IPv4) 00:23:46.316 Subsystem Type: 2 (NVM Subsystem) 00:23:46.316 Entry Flags: 00:23:46.316 Duplicate Returned Information: 0 00:23:46.316 Explicit Persistent Connection Support for Discovery: 0 00:23:46.316 Transport Requirements: 00:23:46.316 Secure Channel: Not Specified 00:23:46.316 Port ID: 1 (0x0001) 00:23:46.316 Controller ID: 65535 (0xffff) 00:23:46.316 Admin Max SQ Size: 32 00:23:46.316 Transport Service Identifier: 4420 00:23:46.316 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:46.316 Transport Address: 10.0.0.1 00:23:46.316 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:46.316 get_feature(0x01) failed 00:23:46.316 get_feature(0x02) failed 00:23:46.316 get_feature(0x04) failed 00:23:46.316 ===================================================== 00:23:46.316 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:46.316 ===================================================== 00:23:46.316 Controller Capabilities/Features 00:23:46.316 ================================ 00:23:46.316 Vendor ID: 0000 00:23:46.316 Subsystem Vendor ID: 
0000 00:23:46.316 Serial Number: e7f1ea3e223a6f5cdb73 00:23:46.316 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:46.316 Firmware Version: 6.8.9-20 00:23:46.316 Recommended Arb Burst: 6 00:23:46.316 IEEE OUI Identifier: 00 00 00 00:23:46.316 Multi-path I/O 00:23:46.316 May have multiple subsystem ports: Yes 00:23:46.316 May have multiple controllers: Yes 00:23:46.316 Associated with SR-IOV VF: No 00:23:46.316 Max Data Transfer Size: Unlimited 00:23:46.316 Max Number of Namespaces: 1024 00:23:46.316 Max Number of I/O Queues: 128 00:23:46.316 NVMe Specification Version (VS): 1.3 00:23:46.316 NVMe Specification Version (Identify): 1.3 00:23:46.316 Maximum Queue Entries: 1024 00:23:46.316 Contiguous Queues Required: No 00:23:46.316 Arbitration Mechanisms Supported 00:23:46.316 Weighted Round Robin: Not Supported 00:23:46.316 Vendor Specific: Not Supported 00:23:46.316 Reset Timeout: 7500 ms 00:23:46.316 Doorbell Stride: 4 bytes 00:23:46.316 NVM Subsystem Reset: Not Supported 00:23:46.316 Command Sets Supported 00:23:46.316 NVM Command Set: Supported 00:23:46.316 Boot Partition: Not Supported 00:23:46.316 Memory Page Size Minimum: 4096 bytes 00:23:46.316 Memory Page Size Maximum: 4096 bytes 00:23:46.316 Persistent Memory Region: Not Supported 00:23:46.316 Optional Asynchronous Events Supported 00:23:46.316 Namespace Attribute Notices: Supported 00:23:46.316 Firmware Activation Notices: Not Supported 00:23:46.316 ANA Change Notices: Supported 00:23:46.316 PLE Aggregate Log Change Notices: Not Supported 00:23:46.316 LBA Status Info Alert Notices: Not Supported 00:23:46.316 EGE Aggregate Log Change Notices: Not Supported 00:23:46.316 Normal NVM Subsystem Shutdown event: Not Supported 00:23:46.316 Zone Descriptor Change Notices: Not Supported 00:23:46.316 Discovery Log Change Notices: Not Supported 00:23:46.316 Controller Attributes 00:23:46.316 128-bit Host Identifier: Supported 00:23:46.316 Non-Operational Permissive Mode: Not Supported 00:23:46.316 NVM Sets: Not 
Supported 00:23:46.316 Read Recovery Levels: Not Supported 00:23:46.316 Endurance Groups: Not Supported 00:23:46.316 Predictable Latency Mode: Not Supported 00:23:46.316 Traffic Based Keep ALive: Supported 00:23:46.316 Namespace Granularity: Not Supported 00:23:46.316 SQ Associations: Not Supported 00:23:46.316 UUID List: Not Supported 00:23:46.316 Multi-Domain Subsystem: Not Supported 00:23:46.316 Fixed Capacity Management: Not Supported 00:23:46.316 Variable Capacity Management: Not Supported 00:23:46.316 Delete Endurance Group: Not Supported 00:23:46.316 Delete NVM Set: Not Supported 00:23:46.316 Extended LBA Formats Supported: Not Supported 00:23:46.316 Flexible Data Placement Supported: Not Supported 00:23:46.316 00:23:46.316 Controller Memory Buffer Support 00:23:46.316 ================================ 00:23:46.316 Supported: No 00:23:46.316 00:23:46.316 Persistent Memory Region Support 00:23:46.316 ================================ 00:23:46.316 Supported: No 00:23:46.316 00:23:46.316 Admin Command Set Attributes 00:23:46.316 ============================ 00:23:46.316 Security Send/Receive: Not Supported 00:23:46.316 Format NVM: Not Supported 00:23:46.316 Firmware Activate/Download: Not Supported 00:23:46.316 Namespace Management: Not Supported 00:23:46.316 Device Self-Test: Not Supported 00:23:46.316 Directives: Not Supported 00:23:46.316 NVMe-MI: Not Supported 00:23:46.316 Virtualization Management: Not Supported 00:23:46.316 Doorbell Buffer Config: Not Supported 00:23:46.316 Get LBA Status Capability: Not Supported 00:23:46.316 Command & Feature Lockdown Capability: Not Supported 00:23:46.316 Abort Command Limit: 4 00:23:46.316 Async Event Request Limit: 4 00:23:46.316 Number of Firmware Slots: N/A 00:23:46.316 Firmware Slot 1 Read-Only: N/A 00:23:46.316 Firmware Activation Without Reset: N/A 00:23:46.316 Multiple Update Detection Support: N/A 00:23:46.316 Firmware Update Granularity: No Information Provided 00:23:46.316 Per-Namespace SMART Log: Yes 
00:23:46.316 Asymmetric Namespace Access Log Page: Supported 00:23:46.316 ANA Transition Time : 10 sec 00:23:46.316 00:23:46.316 Asymmetric Namespace Access Capabilities 00:23:46.316 ANA Optimized State : Supported 00:23:46.316 ANA Non-Optimized State : Supported 00:23:46.316 ANA Inaccessible State : Supported 00:23:46.316 ANA Persistent Loss State : Supported 00:23:46.316 ANA Change State : Supported 00:23:46.316 ANAGRPID is not changed : No 00:23:46.316 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:46.316 00:23:46.316 ANA Group Identifier Maximum : 128 00:23:46.316 Number of ANA Group Identifiers : 128 00:23:46.316 Max Number of Allowed Namespaces : 1024 00:23:46.316 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:46.316 Command Effects Log Page: Supported 00:23:46.316 Get Log Page Extended Data: Supported 00:23:46.316 Telemetry Log Pages: Not Supported 00:23:46.316 Persistent Event Log Pages: Not Supported 00:23:46.316 Supported Log Pages Log Page: May Support 00:23:46.317 Commands Supported & Effects Log Page: Not Supported 00:23:46.317 Feature Identifiers & Effects Log Page:May Support 00:23:46.317 NVMe-MI Commands & Effects Log Page: May Support 00:23:46.317 Data Area 4 for Telemetry Log: Not Supported 00:23:46.317 Error Log Page Entries Supported: 128 00:23:46.317 Keep Alive: Supported 00:23:46.317 Keep Alive Granularity: 1000 ms 00:23:46.317 00:23:46.317 NVM Command Set Attributes 00:23:46.317 ========================== 00:23:46.317 Submission Queue Entry Size 00:23:46.317 Max: 64 00:23:46.317 Min: 64 00:23:46.317 Completion Queue Entry Size 00:23:46.317 Max: 16 00:23:46.317 Min: 16 00:23:46.317 Number of Namespaces: 1024 00:23:46.317 Compare Command: Not Supported 00:23:46.317 Write Uncorrectable Command: Not Supported 00:23:46.317 Dataset Management Command: Supported 00:23:46.317 Write Zeroes Command: Supported 00:23:46.317 Set Features Save Field: Not Supported 00:23:46.317 Reservations: Not Supported 00:23:46.317 Timestamp: Not Supported 
00:23:46.317 Copy: Not Supported 00:23:46.317 Volatile Write Cache: Present 00:23:46.317 Atomic Write Unit (Normal): 1 00:23:46.317 Atomic Write Unit (PFail): 1 00:23:46.317 Atomic Compare & Write Unit: 1 00:23:46.317 Fused Compare & Write: Not Supported 00:23:46.317 Scatter-Gather List 00:23:46.317 SGL Command Set: Supported 00:23:46.317 SGL Keyed: Not Supported 00:23:46.317 SGL Bit Bucket Descriptor: Not Supported 00:23:46.317 SGL Metadata Pointer: Not Supported 00:23:46.317 Oversized SGL: Not Supported 00:23:46.317 SGL Metadata Address: Not Supported 00:23:46.317 SGL Offset: Supported 00:23:46.317 Transport SGL Data Block: Not Supported 00:23:46.317 Replay Protected Memory Block: Not Supported 00:23:46.317 00:23:46.317 Firmware Slot Information 00:23:46.317 ========================= 00:23:46.317 Active slot: 0 00:23:46.317 00:23:46.317 Asymmetric Namespace Access 00:23:46.317 =========================== 00:23:46.317 Change Count : 0 00:23:46.317 Number of ANA Group Descriptors : 1 00:23:46.317 ANA Group Descriptor : 0 00:23:46.317 ANA Group ID : 1 00:23:46.317 Number of NSID Values : 1 00:23:46.317 Change Count : 0 00:23:46.317 ANA State : 1 00:23:46.317 Namespace Identifier : 1 00:23:46.317 00:23:46.317 Commands Supported and Effects 00:23:46.317 ============================== 00:23:46.317 Admin Commands 00:23:46.317 -------------- 00:23:46.317 Get Log Page (02h): Supported 00:23:46.317 Identify (06h): Supported 00:23:46.317 Abort (08h): Supported 00:23:46.317 Set Features (09h): Supported 00:23:46.317 Get Features (0Ah): Supported 00:23:46.317 Asynchronous Event Request (0Ch): Supported 00:23:46.317 Keep Alive (18h): Supported 00:23:46.317 I/O Commands 00:23:46.317 ------------ 00:23:46.317 Flush (00h): Supported 00:23:46.317 Write (01h): Supported LBA-Change 00:23:46.317 Read (02h): Supported 00:23:46.317 Write Zeroes (08h): Supported LBA-Change 00:23:46.317 Dataset Management (09h): Supported 00:23:46.317 00:23:46.317 Error Log 00:23:46.317 ========= 
00:23:46.317 Entry: 0 00:23:46.317 Error Count: 0x3 00:23:46.317 Submission Queue Id: 0x0 00:23:46.317 Command Id: 0x5 00:23:46.317 Phase Bit: 0 00:23:46.317 Status Code: 0x2 00:23:46.317 Status Code Type: 0x0 00:23:46.317 Do Not Retry: 1 00:23:46.317 Error Location: 0x28 00:23:46.317 LBA: 0x0 00:23:46.317 Namespace: 0x0 00:23:46.317 Vendor Log Page: 0x0 00:23:46.317 ----------- 00:23:46.317 Entry: 1 00:23:46.317 Error Count: 0x2 00:23:46.317 Submission Queue Id: 0x0 00:23:46.317 Command Id: 0x5 00:23:46.317 Phase Bit: 0 00:23:46.317 Status Code: 0x2 00:23:46.317 Status Code Type: 0x0 00:23:46.317 Do Not Retry: 1 00:23:46.317 Error Location: 0x28 00:23:46.317 LBA: 0x0 00:23:46.317 Namespace: 0x0 00:23:46.317 Vendor Log Page: 0x0 00:23:46.317 ----------- 00:23:46.317 Entry: 2 00:23:46.317 Error Count: 0x1 00:23:46.317 Submission Queue Id: 0x0 00:23:46.317 Command Id: 0x4 00:23:46.317 Phase Bit: 0 00:23:46.317 Status Code: 0x2 00:23:46.317 Status Code Type: 0x0 00:23:46.317 Do Not Retry: 1 00:23:46.317 Error Location: 0x28 00:23:46.317 LBA: 0x0 00:23:46.317 Namespace: 0x0 00:23:46.317 Vendor Log Page: 0x0 00:23:46.317 00:23:46.317 Number of Queues 00:23:46.317 ================ 00:23:46.317 Number of I/O Submission Queues: 128 00:23:46.317 Number of I/O Completion Queues: 128 00:23:46.317 00:23:46.317 ZNS Specific Controller Data 00:23:46.317 ============================ 00:23:46.317 Zone Append Size Limit: 0 00:23:46.317 00:23:46.317 00:23:46.317 Active Namespaces 00:23:46.317 ================= 00:23:46.317 get_feature(0x05) failed 00:23:46.317 Namespace ID:1 00:23:46.317 Command Set Identifier: NVM (00h) 00:23:46.317 Deallocate: Supported 00:23:46.317 Deallocated/Unwritten Error: Not Supported 00:23:46.317 Deallocated Read Value: Unknown 00:23:46.317 Deallocate in Write Zeroes: Not Supported 00:23:46.317 Deallocated Guard Field: 0xFFFF 00:23:46.317 Flush: Supported 00:23:46.317 Reservation: Not Supported 00:23:46.317 Namespace Sharing Capabilities: Multiple 
Controllers 00:23:46.317 Size (in LBAs): 3907029168 (1863GiB) 00:23:46.317 Capacity (in LBAs): 3907029168 (1863GiB) 00:23:46.317 Utilization (in LBAs): 3907029168 (1863GiB) 00:23:46.317 UUID: 989eff02-45d2-4ff8-b68f-f6e67ee2900f 00:23:46.317 Thin Provisioning: Not Supported 00:23:46.317 Per-NS Atomic Units: Yes 00:23:46.317 Atomic Boundary Size (Normal): 0 00:23:46.317 Atomic Boundary Size (PFail): 0 00:23:46.317 Atomic Boundary Offset: 0 00:23:46.317 NGUID/EUI64 Never Reused: No 00:23:46.317 ANA group ID: 1 00:23:46.317 Namespace Write Protected: No 00:23:46.317 Number of LBA Formats: 1 00:23:46.317 Current LBA Format: LBA Format #00 00:23:46.317 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:46.317 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:46.317 rmmod nvme_tcp 00:23:46.317 rmmod nvme_fabrics 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.317 11:25:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.861 11:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:48.861 11:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:48.861 11:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:48.861 11:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:48.861 11:25:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:48.861 11:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:48.861 11:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:48.861 11:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:48.861 11:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:48.861 11:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:48.861 11:25:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:49.828 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:49.828 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:49.828 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:49.828 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:49.828 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:49.828 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:49.828 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:49.828 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:49.828 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:49.828 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:49.828 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:50.088 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:50.088 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:50.088 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:50.088 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:50.088 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
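The clean_kernel_target trace above (nvmf/common.sh@712-726) undoes the setup in strict reverse order: disable the namespace, drop the port-to-subsystem symlink, then rmdir children before parents, since configfs refuses to remove a non-empty directory. A dry-run sketch with the same values as the log:

```shell
#!/usr/bin/env bash
# Teardown mirror of the clean_kernel_target steps traced above.
# Needs root when run for real; with DRYRUN=1 it only prints.
clean_kernel_target() {
    local nvmet=/sys/kernel/config/nvmet
    local subnqn=nqn.2016-06.io.spdk:testnqn
    local subsys=$nvmet/subsystems/$subnqn

    run() { if [[ ${DRYRUN:-0} == 1 ]]; then echo "$*"; else "$@"; fi; }
    put() { if [[ ${DRYRUN:-0} == 1 ]]; then echo "echo $1 > $2"; else echo "$1" > "$2"; fi; }

    put 0 "$subsys/namespaces/1/enable"             # quiesce the namespace first
    run rm -f "$nvmet/ports/1/subsystems/$subnqn"   # unlink port -> subsystem
    run rmdir "$subsys/namespaces/1"                # children before parents:
    run rmdir "$nvmet/ports/1"                      # configfs rmdir fails on
    run rmdir "$subsys"                             # non-empty directories
    run modprobe -r nvmet_tcp nvmet                 # unload once configfs is clean
}
```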
00:23:52.022 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:23:52.022 00:23:52.023 real 0m11.627s 00:23:52.023 user 0m2.550s 00:23:52.023 sys 0m4.310s 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.023 ************************************ 00:23:52.023 END TEST nvmf_identify_kernel_target 00:23:52.023 ************************************ 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.023 ************************************ 00:23:52.023 START TEST nvmf_auth_host 00:23:52.023 ************************************ 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:52.023 * Looking for test storage... 
00:23:52.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:52.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.023 --rc genhtml_branch_coverage=1 00:23:52.023 --rc genhtml_function_coverage=1 00:23:52.023 --rc genhtml_legend=1 00:23:52.023 --rc geninfo_all_blocks=1 00:23:52.023 --rc geninfo_unexecuted_blocks=1 00:23:52.023 00:23:52.023 ' 00:23:52.023 11:25:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:52.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.023 --rc genhtml_branch_coverage=1 00:23:52.023 --rc genhtml_function_coverage=1 00:23:52.023 --rc genhtml_legend=1 00:23:52.023 --rc geninfo_all_blocks=1 00:23:52.023 --rc geninfo_unexecuted_blocks=1 00:23:52.023 00:23:52.023 ' 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:52.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.023 --rc genhtml_branch_coverage=1 00:23:52.023 --rc genhtml_function_coverage=1 00:23:52.023 --rc genhtml_legend=1 00:23:52.023 --rc geninfo_all_blocks=1 00:23:52.023 --rc geninfo_unexecuted_blocks=1 00:23:52.023 00:23:52.023 ' 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:52.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.023 --rc genhtml_branch_coverage=1 00:23:52.023 --rc genhtml_function_coverage=1 00:23:52.023 --rc genhtml_legend=1 00:23:52.023 --rc geninfo_all_blocks=1 00:23:52.023 --rc geninfo_unexecuted_blocks=1 00:23:52.023 00:23:52.023 ' 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.023 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.024 11:25:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:52.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
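The log records a bash error from common.sh line 33, `'[' '' -eq 1 ']'` failing with "integer expression expected": a numeric `-eq` test was given an empty string because the variable behind it was unset. A minimal reproduction, with the usual fix of defaulting the empty value to 0 before the comparison (variable name `flag` is illustrative, not from the script):

```shell
#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" error from the log:
# a numeric test against an empty variable.
flag=""
[ "$flag" -eq 1 ] 2>/dev/null && echo "enabled" || echo "(numeric test failed)"

# A common fix: default the empty value to 0 so the operand is always numeric.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

The failing test is harmless here because the script only uses it as a condition, but the error message still lands in the log.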
digests=("sha256" "sha384" "sha512") 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:52.024 11:25:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:52.024 11:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:23:55.324 Found 0000:82:00.0 (0x8086 - 0x159b) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:23:55.324 Found 0000:82:00.1 (0x8086 - 0x159b) 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.324 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:23:55.325 Found net devices under 0000:82:00.0: cvl_0_0 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:23:55.325 Found net devices under 0000:82:00.1: cvl_0_1 00:23:55.325 11:25:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.325 11:25:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:23:55.325 00:23:55.325 --- 10.0.0.2 ping statistics --- 00:23:55.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.325 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:55.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:23:55.325 00:23:55.325 --- 10.0.0.1 ping statistics --- 00:23:55.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.325 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2705724 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:55.325 11:25:50 
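The nvmftestinit steps above split the two `ice` ports between the default namespace (initiator, cvl_0_1) and a dedicated namespace (target, cvl_0_0), then verify reachability in both directions with ping. A hedged sketch of that topology setup (interface names, addresses, and the port-4420 iptables rule are taken from the log; root is required, so this is illustrative rather than directly runnable):

```
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
```

Running the target inside its own namespace is what later allows `nvmf_tgt` to be started with `ip netns exec cvl_0_0_ns_spdk`, isolating target traffic from the host stack.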
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2705724 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2705724 ']' 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.325 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4c0783e376a20863111b172559581ae2 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Aq3 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4c0783e376a20863111b172559581ae2 0 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4c0783e376a20863111b172559581ae2 0 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4c0783e376a20863111b172559581ae2 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Aq3 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Aq3 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Aq3 00:23:55.326 11:25:50 
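The `gen_dhchap_key` calls above draw N/2 random bytes from /dev/urandom with `xxd`, write the hex key to a `mktemp` file, and restrict it to owner-only access (the DHHC-1 envelope produced by the inline python step is omitted here). A minimal sketch of the key-material part, with lengths matching the `gen_dhchap_key null 32` case in the log:

```shell
#!/usr/bin/env bash
# Sketch of the key-material step: a 32-hex-digit key is 16 random bytes
# read from /dev/urandom and hex-encoded by xxd.
len=32                                   # hex digits requested ("null 32" case)
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

file=$(mktemp -t spdk.key-null.XXX)      # same template as the log's /tmp files
echo "$key" > "$file"
chmod 0600 "$file"                       # keys are secrets: owner-only access

echo "${#key}"                           # prints 32
```

The sha512/sha384 variants differ only in the byte count (`-l 32` or `-l 24`) and in the digest number passed to the DHHC-1 formatting step.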
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=245f9ffe161ffde49d07d6f8763493cb6cc677c2c373128d7c958e3cbd63e630 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mW0 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 245f9ffe161ffde49d07d6f8763493cb6cc677c2c373128d7c958e3cbd63e630 3 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 245f9ffe161ffde49d07d6f8763493cb6cc677c2c373128d7c958e3cbd63e630 3 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=245f9ffe161ffde49d07d6f8763493cb6cc677c2c373128d7c958e3cbd63e630 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mW0 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mW0 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.mW0 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d065da351d2c32ca731433381d5603d117a243cd5df406f6 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6Po 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d065da351d2c32ca731433381d5603d117a243cd5df406f6 0 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d065da351d2c32ca731433381d5603d117a243cd5df406f6 0 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.326 11:25:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d065da351d2c32ca731433381d5603d117a243cd5df406f6 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6Po 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6Po 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.6Po 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=151bc1a8cf8911311d7382d80248bc2b37de21289c843489 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BAZ 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 151bc1a8cf8911311d7382d80248bc2b37de21289c843489 2 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 151bc1a8cf8911311d7382d80248bc2b37de21289c843489 2 00:23:55.326 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.327 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.327 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=151bc1a8cf8911311d7382d80248bc2b37de21289c843489 00:23:55.327 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:55.327 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BAZ 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BAZ 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.BAZ 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a119e8c82944c05f3dd3cb3c6bcd2217 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vSH 00:23:55.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a119e8c82944c05f3dd3cb3c6bcd2217 1 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a119e8c82944c05f3dd3cb3c6bcd2217 1 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a119e8c82944c05f3dd3cb3c6bcd2217 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vSH 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vSH 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.vSH 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=b158b1511d4b3edb7121e3aa3978f01f 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.VVb 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b158b1511d4b3edb7121e3aa3978f01f 1 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b158b1511d4b3edb7121e3aa3978f01f 1 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b158b1511d4b3edb7121e3aa3978f01f 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.VVb 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.VVb 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.VVb 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:55.586 11:25:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b5564defc7fcde64549865c05e9029b39ec9d69cc52137ab 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.I0z 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b5564defc7fcde64549865c05e9029b39ec9d69cc52137ab 2 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b5564defc7fcde64549865c05e9029b39ec9d69cc52137ab 2 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b5564defc7fcde64549865c05e9029b39ec9d69cc52137ab 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:55.586 11:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.I0z 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.I0z 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.I0z 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=62064f934aeff8126f23088ab18c099e 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.94N 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 62064f934aeff8126f23088ab18c099e 0 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 62064f934aeff8126f23088ab18c099e 0 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=62064f934aeff8126f23088ab18c099e 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.94N 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.94N 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.94N 00:23:55.586 11:25:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5ca00ccf0517496ad09c842ea80fd4c779ead38c914129ba4a2f40097095d200 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xOZ 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5ca00ccf0517496ad09c842ea80fd4c779ead38c914129ba4a2f40097095d200 3 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5ca00ccf0517496ad09c842ea80fd4c779ead38c914129ba4a2f40097095d200 3 00:23:55.586 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.587 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.587 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5ca00ccf0517496ad09c842ea80fd4c779ead38c914129ba4a2f40097095d200 00:23:55.587 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:55.587 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:23:55.845 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xOZ 00:23:55.845 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xOZ 00:23:55.845 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.xOZ 00:23:55.846 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:55.846 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2705724 00:23:55.846 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2705724 ']' 00:23:55.846 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.846 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.846 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
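The `gen_dhchap_key` traces above end each time with an inline `python -` step (nvmf/common.sh@733) that turns the raw hex secret from `xxd /dev/urandom` into the `DHHC-1:NN:…:` strings seen later in this log. A minimal sketch of what that formatting appears to do, reconstructed from those strings: the secret is the ASCII text of the hex string (not its decoded bytes), with a 4-byte CRC32 trailer appended before base64 encoding. The CRC byte order and the exact digest-field formatting are assumptions; the function name mirrors the shell helper but is otherwise hypothetical.

```python
import base64
import zlib

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Wrap a hex secret in the DHHC-1 representation seen in this log.

    The secret is the ASCII text of the hex string itself; a 4-byte CRC32
    trailer (little-endian order is an assumption here) is appended before
    base64 encoding, and the digest id is zero-padded to two digits.
    """
    raw = key.encode("ascii")                       # ASCII hex chars are the secret
    crc = zlib.crc32(raw).to_bytes(4, "little")     # integrity trailer (assumed LE)
    return f"{prefix}:{digest:02d}:{base64.b64encode(raw + crc).decode()}:"

# The 48-hex-char secret generated for keys[1] in the trace above:
print(format_dhchap_key("d065da351d2c32ca731433381d5603d117a243cd5df406f6", 0))
```

Consistent with the trace, the base64 payload starts with the encoding of the ASCII hex string itself (`ZDA2NWRh…` for `d065da…`), matching the `DHHC-1:00:ZDA2NWRh…` key passed to `nvmet_auth_set_key` further down.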
00:23:55.846 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.846 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Aq3 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.mW0 ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mW0 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.6Po 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.BAZ ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BAZ 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.vSH 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.VVb ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VVb 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.I0z 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.94N ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.94N 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xOZ 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.104 11:25:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:56.104 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:23:56.105 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:56.105 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:56.105 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:56.105 11:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:57.479 Waiting for block devices as requested 00:23:57.479 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:23:57.479 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:57.479 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:57.737 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:57.738 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:57.738 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:57.738 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:57.738 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:57.995 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:57.995 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:57.995 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:58.253 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:58.253 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:58.253 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:58.253 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:58.511 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:58.511 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:59.078 No valid GPT data, bailing 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:59.078 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:23:59.079 00:23:59.079 Discovery Log Number of Records 2, Generation counter 2 00:23:59.079 =====Discovery Log Entry 0====== 00:23:59.079 trtype: tcp 00:23:59.079 adrfam: ipv4 00:23:59.079 subtype: current discovery subsystem 00:23:59.079 treq: not specified, sq flow control disable supported 00:23:59.079 portid: 1 00:23:59.079 trsvcid: 4420 00:23:59.079 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:59.079 traddr: 10.0.0.1 00:23:59.079 eflags: none 00:23:59.079 sectype: none 00:23:59.079 =====Discovery Log Entry 1====== 00:23:59.079 trtype: tcp 00:23:59.079 adrfam: ipv4 00:23:59.079 subtype: nvme subsystem 00:23:59.079 treq: not specified, sq flow control disable supported 00:23:59.079 portid: 1 00:23:59.079 trsvcid: 4420 00:23:59.079 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:59.079 traddr: 10.0.0.1 00:23:59.079 eflags: none 00:23:59.079 sectype: none 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.079 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.338 nvme0n1 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]] 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.338 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.597 nvme0n1 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.597 11:25:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.597 
11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.597 11:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.856 nvme0n1 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]] 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.856 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:24:00.114 nvme0n1 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.114 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]] 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.115 nvme0n1 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.115 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:00.373 11:25:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.373 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.374 nvme0n1
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/:
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=:
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/:
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]]
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=:
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:00.374 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:00.632 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:00.632 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:00.632 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:00.632 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:00.632 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:00.632 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:00.632 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:00.632 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:00.632 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:00.632 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:00.632 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.632 11:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.632 nvme0n1
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==:
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==:
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==:
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]]
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==:
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:00.632 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.633 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.891 nvme0n1
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE:
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX:
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE:
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]]
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX:
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:00.891 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:01.149 nvme0n1
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==:
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY:
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==:
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]]
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY:
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.149 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:01.407 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.407 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:01.407 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:01.407 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:01.407 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:01.407 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:01.407 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:01.407 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:01.407 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:01.407 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:01.407 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:01.407 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:01.408 nvme0n1
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.408 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=:
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=:
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:01.666 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.667 11:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:01.667 nvme0n1
00:24:01.667 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.667 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:01.667 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.667 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:01.667 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:01.667 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/:
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=:
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/:
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]]
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=:
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.925 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:02.184 nvme0n1
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==:
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==:
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==:
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]]
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==:
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.184 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.442 nvme0n1 00:24:02.442 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.442 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.442 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.442 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.442 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.442 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.701 11:25:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]] 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.701 11:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.959 nvme0n1 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.959 11:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:02.959 
11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]] 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:02.959 11:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.959 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.525 nvme0n1 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.525 11:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.525 
11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.525 11:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.783 nvme0n1 00:24:03.783 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.783 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.783 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.783 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.783 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.783 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.783 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.783 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.783 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.783 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.783 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.783 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]] 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.784 11:25:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.784 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.349 nvme0n1 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:04.349 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.350 11:25:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:04.350 11:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:04.916 nvme0n1
00:24:04.916 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:04.916 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:04.916 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:04.916 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:04.916 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:04.916 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:04.916 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE:
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX:
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE:
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]]
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX:
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:04.917 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.484 nvme0n1
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:05.484 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==:
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY:
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==:
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]]
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY:
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.485 11:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:06.051 nvme0n1
00:24:06.051 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.051 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:06.051 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.051 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:06.051 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:06.051 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.051 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:06.051 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=:
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=:
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.052 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:06.618 nvme0n1
00:24:06.618 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.618 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:06.618 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.618 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:06.618 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:06.618 11:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/:
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=:
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/:
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]]
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=:
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:06.618 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:06.619 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:06.619 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:06.619 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:06.619 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:06.619 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:06.619 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:06.619 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:06.619 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:06.619 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:06.619 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.619 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.553 nvme0n1
00:24:07.553 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:07.553 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:07.553 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:07.553 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:07.553 11:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.553 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:07.553 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:07.553 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:07.553 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:07.553 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==:
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==:
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==:
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]]
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==:
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:07.811 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:07.812 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:08.746 nvme0n1
00:24:08.746 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:08.746 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:08.746 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:08.746 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:08.746 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:08.746 11:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE:
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX:
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE:
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]]
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX:
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:08.746 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.681 nvme0n1
00:24:09.681 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:09.681 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:09.681 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:09.681 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:09.681 11:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==:
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY:
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==:
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]]
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY:
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:09.681 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.616 nvme0n1
00:24:10.616 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.616 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:10.616 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.616 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.616 11:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=:
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=:
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:10.616
11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.616 11:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.552 nvme0n1 00:24:11.552 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.552 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.552 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.552 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.552 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.552 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]] 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.811 nvme0n1 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.811 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.071 
11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.071 nvme0n1 
00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:12.071 11:26:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]] 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.071 
11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.071 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.072 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.072 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.072 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.072 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.072 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.072 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.072 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.072 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.072 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.072 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.330 nvme0n1 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.330 11:26:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]] 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.330 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.331 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.589 nvme0n1 00:24:12.589 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.589 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.589 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.589 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.589 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.589 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.589 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.589 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:12.589 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.590 11:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.590 11:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.590 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.849 nvme0n1 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]] 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.849 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.850 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.850 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.850 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.850 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.850 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.108 nvme0n1 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.108 
11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:13.108 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.109 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.367 nvme0n1 00:24:13.367 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:13.367 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.367 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.367 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.367 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.367 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.367 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.367 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.367 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 
00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]] 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.368 11:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.368 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.626 nvme0n1 00:24:13.626 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.626 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.626 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.626 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.626 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.626 11:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]] 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.627 11:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.627 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.890 nvme0n1 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.890 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.148 nvme0n1 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.148 11:26:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]] 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.148 11:26:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.148 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.149 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.149 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.149 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.149 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.149 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.149 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.149 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.149 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.149 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.149 11:26:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.406 nvme0n1 00:24:14.406 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.406 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.406 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.406 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.406 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.406 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.672 
11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.672 11:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.990 nvme0n1 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.990 11:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:14.990 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.991 11:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]] 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.991 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.273 nvme0n1 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.273 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]] 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.274 11:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.274 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.532 nvme0n1 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.532 11:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:15.532 11:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:15.532 
11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.532 11:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.790 nvme0n1 00:24:15.790 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.790 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.790 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.790 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.790 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.791 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.791 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.791 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.791 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.791 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.049 11:26:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]] 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.049 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.616 nvme0n1 
00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:16.616 11:26:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.616 
11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.616 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.617 11:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.186 nvme0n1 00:24:17.186 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.187 11:26:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:17.187 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.188 11:26:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]] 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.188 11:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.757 nvme0n1 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]] 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:17.757 11:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.757 11:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.757 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.323 nvme0n1 00:24:18.323 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.323 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.323 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.323 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.323 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.323 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.582 11:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:18.582 11:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.150 nvme0n1 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.150 11:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]] 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.150 11:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.086 nvme0n1 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.086 11:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.021 nvme0n1 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]] 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.021 11:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.955 nvme0n1 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]] 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.955 11:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.888 nvme0n1 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.888 11:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:23.822 nvme0n1 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]] 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:23.822 11:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.822 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.081 nvme0n1 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.081 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.339 nvme0n1 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
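The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` lines repeated in the trace above use bash's `${var:+word}` expansion to build an argument array that is empty when no controller key exists for the keyid, and contains the flag pair otherwise. A minimal standalone sketch of that idiom (the `ckeys` values here are placeholders, not the real DHHC-1 secrets from this run):

```shell
#!/usr/bin/env bash
# Sketch of the ${var:+...} idiom seen in host/auth.sh: expand to an
# optional "--dhchap-ctrlr-key ckeyN" pair only when a controller key
# is set for this keyid; otherwise the array stays empty, so the flag
# is silently omitted from the rpc_cmd invocation.
ckeys=("DHHC-1:03:placeholder0:" "" "DHHC-1:01:placeholder2:")

for keyid in "${!ckeys[@]}"; do
  # Unquoted expansion deliberately word-splits into two array elements.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid=$keyid extra-args: ${ckey[*]:-<none>}"
done
```

When applied, an empty `ckeys[keyid]` (as for keyid 4 in this log, where `ckey=` is blank) yields a zero-length array, so `rpc_cmd bdev_nvme_attach_controller ... "${ckey[@]}"` runs without the controller-key flag at all.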
00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 
00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]] 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.339 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.340 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.340 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.340 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.340 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.340 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.598 nvme0n1 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]] 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:24.598 11:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.598 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.599 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.599 11:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.599 nvme0n1 00:24:24.599 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.599 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.599 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.599 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.599 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.599 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:24.857 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 
00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.858 nvme0n1 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 
00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]] 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.858 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.116 nvme0n1 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.116 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe3072 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.375 11:26:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.375 nvme0n1 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.375 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.633 11:26:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]] 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 
-- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.633 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.634 11:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.892 nvme0n1 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]] 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.892 11:26:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.892 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.893 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.151 nvme0n1 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.151 11:26:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.151 nvme0n1 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.151 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.151 
11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]] 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.410 11:26:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.410 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.668 nvme0n1 00:24:26.668 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.668 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.668 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.668 11:26:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.668 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.668 11:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.668 
11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.668 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.927 nvme0n1 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]] 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 
00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.927 11:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.927 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.494 nvme0n1 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.494 11:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]] 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.494 11:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.753 nvme0n1 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.753 11:26:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.753 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.011 nvme0n1 00:24:28.012 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.012 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.012 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.012 
11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.012 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]] 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.270 11:26:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.270 11:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.837 nvme0n1 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.837 11:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:28.837 11:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.837 11:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.837 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.403 nvme0n1 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.403 11:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]] 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:29.403 11:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.403 11:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.970 nvme0n1 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==: 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]] 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:29.970 11:26:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.970 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.537 nvme0n1 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=: 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.537 
11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:30.537 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.538 11:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.104 nvme0n1 00:24:31.104 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.104 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.104 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.104 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.104 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.104 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.104 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.104 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.104 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.104 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:31.362 11:26:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgzZTM3NmEyMDg2MzExMWIxNzI1NTk1ODFhZTKPXjS/: 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: ]] 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ1ZjlmZmUxNjFmZmRlNDlkMDdkNmY4NzYzNDkzY2I2Y2M2NzdjMmMzNzMxMjhkN2M5NThlM2NiZDYzZTYzMHZ8h6g=: 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.362 11:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.299 nvme0n1 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:24:32.299 11:26:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.299 11:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.234 nvme0n1 00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:33.234 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE:
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX:
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE:
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]]
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX:
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.235 11:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.170 nvme0n1
00:24:34.170 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.170 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:34.170 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:34.170 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.170 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.170 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.170 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:34.170 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:34.170 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.170 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==:
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY:
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjU1NjRkZWZjN2ZjZGU2NDU0OTg2NWMwNWU5MDI5YjM5ZWM5ZDY5Y2M1MjEzN2FiLvFwSw==:
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY: ]]
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIwNjRmOTM0YWVmZjgxMjZmMjMwODhhYjE4YzA5OWV3emnY:
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.428 11:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
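The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` lines above (host/auth.sh@58) rely on bash's `${var:+word}` expansion to build an argument array that is empty when no controller key exists for a keyid, so the attach command silently omits `--dhchap-ctrlr-key` in that case. A standalone sketch with made-up values:

```shell
# Hypothetical per-keyid controller keys; index 1 deliberately left
# empty to mirror a keyid that has no ckey in the run above.
ckeys=("secret-a" "" "secret-c")

for keyid in "${!ckeys[@]}"; do
    # Expands to two words when ckeys[keyid] is set and non-empty,
    # and to zero words otherwise -- the host/auth.sh@58 idiom.
    args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra_args=${#args[@]}"
done
```

This prints `extra_args=2` for keyids 0 and 2 and `extra_args=0` for keyid 1, which is exactly why the key4 attach in this log carries no `--dhchap-ctrlr-key` flag.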
00:24:35.363 nvme0n1
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=:
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWNhMDBjY2YwNTE3NDk2YWQwOWM4NDJlYTgwZmQ0Yzc3OWVhZDM4YzkxNDEyOWJhNGEyZjQwMDk3MDk1ZDIwMGoWRFc=:
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:35.363 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:35.364 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:35.364 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:35.364 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:35.364 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:35.364 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:35.364 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:35.364 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:35.364 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:35.364 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:35.364 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.364 11:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.299 nvme0n1
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==:
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==:
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==:
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==:
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.299 request:
00:24:36.299 {
00:24:36.299 "name": "nvme0",
00:24:36.299 "trtype": "tcp",
00:24:36.299 "traddr": "10.0.0.1",
00:24:36.299 "adrfam": "ipv4",
00:24:36.299 "trsvcid": "4420",
00:24:36.299 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:24:36.299 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:24:36.299 "prchk_reftag": false,
00:24:36.299 "prchk_guard": false,
00:24:36.299 "hdgst": false,
00:24:36.299 "ddgst": false,
00:24:36.299 "allow_unrecognized_csi": false,
00:24:36.299 "method": "bdev_nvme_attach_controller",
00:24:36.299 "req_id": 1
00:24:36.299 }
00:24:36.299 Got JSON-RPC error response
00:24:36.299 response:
00:24:36.299 {
00:24:36.299 "code": -5,
00:24:36.299 "message": "Input/output error"
00:24:36.299 }
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:36.299 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.300 request:
00:24:36.300 {
00:24:36.300 "name": "nvme0",
00:24:36.300 "trtype": "tcp",
00:24:36.300 "traddr": "10.0.0.1",
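The `NOT rpc_cmd ...` wrapper that precedes each expected failure comes from autotest_common.sh: it runs the wrapped command (after the `valid_exec_arg`/`type -t` checks traced above) and inverts its exit status, so an attach that is supposed to be rejected counts as a pass; the `es=1` bookkeeping seen in the log is elided here. A reduced sketch of the idea:

```shell
# Reduced version of the NOT helper: succeed only when "$@" fails.
NOT() {
    if "$@"; then
        return 1    # the command unexpectedly succeeded
    fi
    return 0        # the command failed, which is what we asserted
}

NOT false && echo "expected failure observed"
NOT true || echo "unexpected success caught"
```

The real helper additionally distinguishes signal deaths (`es > 128`) and expected error strings, as the `@663`/`@674` trace lines show.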
00:24:36.300 "adrfam": "ipv4",
00:24:36.300 "trsvcid": "4420",
00:24:36.300 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:24:36.300 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:24:36.300 "prchk_reftag": false,
00:24:36.300 "prchk_guard": false,
00:24:36.300 "hdgst": false,
00:24:36.300 "ddgst": false,
00:24:36.300 "dhchap_key": "key2",
00:24:36.300 "allow_unrecognized_csi": false,
00:24:36.300 "method": "bdev_nvme_attach_controller",
00:24:36.300 "req_id": 1
00:24:36.300 }
00:24:36.300 Got JSON-RPC error response
00:24:36.300 response:
00:24:36.300 {
00:24:36.300 "code": -5,
00:24:36.300 "message": "Input/output error"
00:24:36.300 }
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.300 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.560 request:
00:24:36.560 {
00:24:36.560 "name": "nvme0",
00:24:36.560 "trtype": "tcp",
00:24:36.560 "traddr": "10.0.0.1",
00:24:36.560 "adrfam": "ipv4",
00:24:36.560 "trsvcid": "4420",
00:24:36.560 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:24:36.560 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:24:36.560 "prchk_reftag": false,
00:24:36.560 "prchk_guard": false,
00:24:36.560 "hdgst": false,
00:24:36.560 "ddgst": false,
00:24:36.560 "dhchap_key": "key1",
00:24:36.560 "dhchap_ctrlr_key": "ckey2",
00:24:36.560 "allow_unrecognized_csi": false,
00:24:36.560 "method": "bdev_nvme_attach_controller",
00:24:36.560 "req_id": 1
00:24:36.560 }
00:24:36.560 Got JSON-RPC error response
00:24:36.560 response:
00:24:36.560 {
00:24:36.560 "code": -5,
00:24:36.560 "message": "Input/output error"
00:24:36.560 }
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.560 11:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.560 nvme0n1
11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE:
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX:
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE:
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]]
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX:
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.560 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.819 request:
00:24:36.819 {
00:24:36.819 "name": "nvme0",
00:24:36.819 "dhchap_key": "key1",
00:24:36.819 "dhchap_ctrlr_key": "ckey2",
00:24:36.819 "method": "bdev_nvme_set_keys",
00:24:36.819 "req_id": 1
00:24:36.819 }
00:24:36.819 Got JSON-RPC error response
00:24:36.819 response:
00:24:36.819 {
00:24:36.819 "code": -13,
00:24:36.819 "message": "Permission denied"
00:24:36.819 }
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:24:36.819 11:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:24:38.194 11:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:24:38.194 11:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:24:38.194 11:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:38.194 11:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:38.194 11:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:38.194 11:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:24:38.194 11:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host --
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDA2NWRhMzUxZDJjMzJjYTczMTQzMzM4MWQ1NjAzZDExN2EyNDNjZDVkZjQwNmY2tJNDLQ==: 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: ]] 00:24:39.130 11:26:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTUxYmMxYThjZjg5MTEzMTFkNzM4MmQ4MDI0OGJjMmIzN2RlMjEyODljODQzNDg5SQl0sw==: 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.130 nvme0n1 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.130 11:26:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTExOWU4YzgyOTQ0YzA1ZjNkZDNjYjNjNmJjZDIyMTfYrXrE: 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: ]] 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjE1OGIxNTExZDRiM2VkYjcxMjFlM2FhMzk3OGYwMWbpqFuX: 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:39.130 
11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.130 request: 00:24:39.130 { 00:24:39.130 "name": "nvme0", 00:24:39.130 "dhchap_key": "key2", 00:24:39.130 "dhchap_ctrlr_key": "ckey1", 00:24:39.130 "method": "bdev_nvme_set_keys", 00:24:39.130 "req_id": 1 00:24:39.130 } 00:24:39.130 Got JSON-RPC error response 00:24:39.130 response: 00:24:39.130 { 00:24:39.130 "code": -13, 00:24:39.130 "message": "Permission denied" 00:24:39.130 } 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.130 11:26:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:39.130 11:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.594 rmmod nvme_tcp 00:24:40.594 rmmod nvme_fabrics 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2705724 ']' 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2705724 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2705724 ']' 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2705724 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2705724 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2705724' 00:24:40.594 killing process with pid 2705724 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2705724 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2705724 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.594 11:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.498 11:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:42.757 11:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:42.757 11:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:42.757 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:42.757 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:42.757 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:42.757 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:42.757 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:42.757 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:42.757 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:42.757 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:42.757 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:42.757 11:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:44.134 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:44.134 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:44.134 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:44.134 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:44.134 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:44.134 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:44.134 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:44.134 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:44.134 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:44.134 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:44.134 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:44.134 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:44.134 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:44.134 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:44.134 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:44.134 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:46.044 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:24:46.044 11:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Aq3 /tmp/spdk.key-null.6Po /tmp/spdk.key-sha256.vSH /tmp/spdk.key-sha384.I0z /tmp/spdk.key-sha512.xOZ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:46.044 11:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:47.421 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:47.421 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:47.421 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:47.421 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:47.421 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:47.421 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:47.421 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:47.421 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:47.421 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:47.421 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:47.421 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:47.421 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:47.421 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:47.421 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:47.421 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:47.421 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:47.681 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:47.681 00:24:47.681 real 0m55.829s 00:24:47.681 user 0m51.930s 00:24:47.681 sys 0m6.990s 00:24:47.681 11:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:47.681 11:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.681 ************************************ 00:24:47.681 END TEST nvmf_auth_host 00:24:47.681 ************************************ 00:24:47.681 11:26:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:24:47.681 11:26:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:47.681 11:26:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:47.681 11:26:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:47.681 11:26:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.681 ************************************ 00:24:47.681 START TEST nvmf_digest 00:24:47.681 ************************************ 00:24:47.681 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:47.941 * Looking for test storage... 00:24:47.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:47.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.941 --rc genhtml_branch_coverage=1 00:24:47.941 --rc genhtml_function_coverage=1 00:24:47.941 --rc genhtml_legend=1 00:24:47.941 --rc geninfo_all_blocks=1 00:24:47.941 --rc geninfo_unexecuted_blocks=1 00:24:47.941 00:24:47.941 ' 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:47.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.941 --rc genhtml_branch_coverage=1 00:24:47.941 --rc genhtml_function_coverage=1 00:24:47.941 --rc genhtml_legend=1 00:24:47.941 --rc geninfo_all_blocks=1 00:24:47.941 --rc geninfo_unexecuted_blocks=1 00:24:47.941 00:24:47.941 ' 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:47.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.941 --rc genhtml_branch_coverage=1 00:24:47.941 --rc genhtml_function_coverage=1 00:24:47.941 --rc genhtml_legend=1 00:24:47.941 --rc geninfo_all_blocks=1 00:24:47.941 --rc geninfo_unexecuted_blocks=1 00:24:47.941 00:24:47.941 ' 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:47.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.941 --rc genhtml_branch_coverage=1 00:24:47.941 --rc genhtml_function_coverage=1 00:24:47.941 --rc genhtml_legend=1 00:24:47.941 --rc geninfo_all_blocks=1 00:24:47.941 --rc geninfo_unexecuted_blocks=1 00:24:47.941 00:24:47.941 ' 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:47.941 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:47.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:47.942 11:26:43 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:24:47.942 11:26:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:50.474 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.475 11:26:45 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:24:50.475 Found 0000:82:00.0 (0x8086 - 0x159b) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:24:50.475 Found 0000:82:00.1 (0x8086 - 0x159b) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:24:50.475 Found net devices under 0000:82:00.0: cvl_0_0 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:24:50.475 Found net devices under 0000:82:00.1: cvl_0_1 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:50.475 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.734 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.734 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.734 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:50.734 11:26:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:50.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:24:50.734 00:24:50.734 --- 10.0.0.2 ping statistics --- 00:24:50.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.734 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:24:50.734 00:24:50.734 --- 10.0.0.1 ping statistics --- 00:24:50.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.734 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:50.734 ************************************ 00:24:50.734 START TEST nvmf_digest_clean 00:24:50.734 ************************************ 00:24:50.734 
11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2716436 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2716436 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2716436 ']' 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.734 11:26:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.734 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:50.735 [2024-11-19 11:26:46.102588] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:24:50.735 [2024-11-19 11:26:46.102664] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.735 [2024-11-19 11:26:46.183867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.993 [2024-11-19 11:26:46.240581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.993 [2024-11-19 11:26:46.240629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.993 [2024-11-19 11:26:46.240652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.993 [2024-11-19 11:26:46.240663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.993 [2024-11-19 11:26:46.240673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:50.993 [2024-11-19 11:26:46.241227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:50.993 null0 00:24:50.993 [2024-11-19 11:26:46.453803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.993 [2024-11-19 11:26:46.478047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2716462 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2716462 /var/tmp/bperf.sock 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2716462 ']' 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:50.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.993 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:51.251 [2024-11-19 11:26:46.525820] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:24:51.251 [2024-11-19 11:26:46.525884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716462 ] 00:24:51.251 [2024-11-19 11:26:46.599105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.251 [2024-11-19 11:26:46.655639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.508 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.508 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:51.508 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:51.508 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:51.508 11:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:51.766 11:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.766 11:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:52.331 nvme0n1 00:24:52.331 11:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:52.331 11:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:52.331 Running I/O for 2 seconds... 00:24:54.640 19222.00 IOPS, 75.09 MiB/s [2024-11-19T10:26:50.137Z] 19728.50 IOPS, 77.06 MiB/s 00:24:54.640 Latency(us) 00:24:54.640 [2024-11-19T10:26:50.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.640 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:54.640 nvme0n1 : 2.04 19339.41 75.54 0.00 0.00 6479.11 3301.07 48351.00 00:24:54.640 [2024-11-19T10:26:50.137Z] =================================================================================================================== 00:24:54.640 [2024-11-19T10:26:50.137Z] Total : 19339.41 75.54 0.00 0.00 6479.11 3301.07 48351.00 00:24:54.640 { 00:24:54.640 "results": [ 00:24:54.640 { 00:24:54.640 "job": "nvme0n1", 00:24:54.640 "core_mask": "0x2", 00:24:54.640 "workload": "randread", 00:24:54.640 "status": "finished", 00:24:54.640 "queue_depth": 128, 00:24:54.640 "io_size": 4096, 00:24:54.640 "runtime": 2.044582, 00:24:54.640 "iops": 19339.40531609884, 00:24:54.640 "mibps": 75.5445520160111, 00:24:54.640 "io_failed": 0, 00:24:54.640 "io_timeout": 0, 00:24:54.640 "avg_latency_us": 6479.1070014340485, 00:24:54.640 "min_latency_us": 3301.0725925925926, 00:24:54.640 "max_latency_us": 48351.00444444444 00:24:54.640 } 00:24:54.640 ], 00:24:54.640 "core_count": 1 00:24:54.640 } 00:24:54.640 11:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:54.640 11:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:24:54.640 11:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:54.640 11:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:54.640 11:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:54.640 | select(.opcode=="crc32c") 00:24:54.640 | "\(.module_name) \(.executed)"' 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2716462 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2716462 ']' 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2716462 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2716462 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2716462' 00:24:54.902 killing process with pid 2716462 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2716462 00:24:54.902 Received shutdown signal, test time was about 2.000000 seconds 00:24:54.902 00:24:54.902 Latency(us) 00:24:54.902 [2024-11-19T10:26:50.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.902 [2024-11-19T10:26:50.399Z] =================================================================================================================== 00:24:54.902 [2024-11-19T10:26:50.399Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2716462 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2716872 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2716872 /var/tmp/bperf.sock 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2716872 ']' 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:54.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.902 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:55.160 [2024-11-19 11:26:50.428776] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:24:55.160 [2024-11-19 11:26:50.428869] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716872 ] 00:24:55.160 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:55.160 Zero copy mechanism will not be used. 
00:24:55.160 [2024-11-19 11:26:50.504603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.160 [2024-11-19 11:26:50.561593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.418 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.418 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:55.418 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:55.418 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:55.418 11:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:55.676 11:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:55.676 11:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:55.934 nvme0n1 00:24:55.934 11:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:55.935 11:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:56.193 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:56.193 Zero copy mechanism will not be used. 00:24:56.193 Running I/O for 2 seconds... 
00:24:58.062 4616.00 IOPS, 577.00 MiB/s [2024-11-19T10:26:53.559Z] 4734.00 IOPS, 591.75 MiB/s 00:24:58.062 Latency(us) 00:24:58.062 [2024-11-19T10:26:53.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.062 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:58.062 nvme0n1 : 2.00 4735.37 591.92 0.00 0.00 3375.35 546.13 5339.97 00:24:58.062 [2024-11-19T10:26:53.559Z] =================================================================================================================== 00:24:58.062 [2024-11-19T10:26:53.559Z] Total : 4735.37 591.92 0.00 0.00 3375.35 546.13 5339.97 00:24:58.062 { 00:24:58.062 "results": [ 00:24:58.062 { 00:24:58.062 "job": "nvme0n1", 00:24:58.062 "core_mask": "0x2", 00:24:58.062 "workload": "randread", 00:24:58.062 "status": "finished", 00:24:58.062 "queue_depth": 16, 00:24:58.062 "io_size": 131072, 00:24:58.062 "runtime": 2.002801, 00:24:58.062 "iops": 4735.368116952209, 00:24:58.062 "mibps": 591.9210146190261, 00:24:58.062 "io_failed": 0, 00:24:58.062 "io_timeout": 0, 00:24:58.062 "avg_latency_us": 3375.353501726104, 00:24:58.062 "min_latency_us": 546.1333333333333, 00:24:58.062 "max_latency_us": 5339.970370370371 00:24:58.062 } 00:24:58.062 ], 00:24:58.062 "core_count": 1 00:24:58.062 } 00:24:58.062 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:58.062 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:58.062 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:58.062 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:58.062 | select(.opcode=="crc32c") 00:24:58.062 | "\(.module_name) \(.executed)"' 00:24:58.062 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:58.320 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:58.320 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:58.320 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:58.320 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:58.320 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2716872 00:24:58.320 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2716872 ']' 00:24:58.320 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2716872 00:24:58.579 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:58.579 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.579 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2716872 00:24:58.579 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:58.579 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:58.579 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2716872' 00:24:58.579 killing process with pid 2716872 00:24:58.579 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2716872 00:24:58.579 Received shutdown signal, test time was about 2.000000 seconds 
00:24:58.579 00:24:58.579 Latency(us) 00:24:58.579 [2024-11-19T10:26:54.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.579 [2024-11-19T10:26:54.076Z] =================================================================================================================== 00:24:58.579 [2024-11-19T10:26:54.076Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:58.579 11:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2716872 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2717397 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2717397 /var/tmp/bperf.sock 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2717397 ']' 00:24:58.579 11:26:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:58.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.579 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:58.837 [2024-11-19 11:26:54.118312] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:24:58.837 [2024-11-19 11:26:54.118424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717397 ] 00:24:58.837 [2024-11-19 11:26:54.193172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.837 [2024-11-19 11:26:54.251104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.095 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.095 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:59.095 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:59.095 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:59.095 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:59.353 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.353 11:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.919 nvme0n1 00:24:59.919 11:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:59.919 11:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:59.919 Running I/O for 2 seconds... 
00:25:02.226 23195.00 IOPS, 90.61 MiB/s [2024-11-19T10:26:57.723Z] 23234.50 IOPS, 90.76 MiB/s 00:25:02.226 Latency(us) 00:25:02.226 [2024-11-19T10:26:57.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.226 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:02.226 nvme0n1 : 2.00 23242.01 90.79 0.00 0.00 5501.63 2500.08 14175.19 00:25:02.226 [2024-11-19T10:26:57.723Z] =================================================================================================================== 00:25:02.226 [2024-11-19T10:26:57.723Z] Total : 23242.01 90.79 0.00 0.00 5501.63 2500.08 14175.19 00:25:02.226 { 00:25:02.226 "results": [ 00:25:02.226 { 00:25:02.226 "job": "nvme0n1", 00:25:02.226 "core_mask": "0x2", 00:25:02.226 "workload": "randwrite", 00:25:02.226 "status": "finished", 00:25:02.226 "queue_depth": 128, 00:25:02.226 "io_size": 4096, 00:25:02.226 "runtime": 2.004861, 00:25:02.226 "iops": 23242.010293980482, 00:25:02.226 "mibps": 90.78910271086126, 00:25:02.226 "io_failed": 0, 00:25:02.226 "io_timeout": 0, 00:25:02.226 "avg_latency_us": 5501.625826618944, 00:25:02.226 "min_latency_us": 2500.077037037037, 00:25:02.226 "max_latency_us": 14175.194074074074 00:25:02.226 } 00:25:02.226 ], 00:25:02.226 "core_count": 1 00:25:02.226 } 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:02.226 | select(.opcode=="crc32c") 00:25:02.226 | "\(.module_name) \(.executed)"' 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2717397 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2717397 ']' 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2717397 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2717397 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2717397' 00:25:02.226 killing process with pid 2717397 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2717397 00:25:02.226 Received shutdown signal, test time was about 2.000000 seconds 
00:25:02.226 00:25:02.226 Latency(us) 00:25:02.226 [2024-11-19T10:26:57.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.226 [2024-11-19T10:26:57.723Z] =================================================================================================================== 00:25:02.226 [2024-11-19T10:26:57.723Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.226 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2717397 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2717807 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2717807 /var/tmp/bperf.sock 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2717807 ']' 00:25:02.485 11:26:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:02.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.485 11:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:02.485 [2024-11-19 11:26:57.935956] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:25:02.485 [2024-11-19 11:26:57.936050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717807 ] 00:25:02.485 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:02.485 Zero copy mechanism will not be used. 
00:25:02.743 [2024-11-19 11:26:58.011941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.743 [2024-11-19 11:26:58.070254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.743 11:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.743 11:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:02.743 11:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:02.743 11:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:02.743 11:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:03.310 11:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.310 11:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.568 nvme0n1 00:25:03.568 11:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:03.568 11:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:03.568 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:03.568 Zero copy mechanism will not be used. 00:25:03.568 Running I/O for 2 seconds... 
00:25:05.878 5037.00 IOPS, 629.62 MiB/s [2024-11-19T10:27:01.375Z] 5394.50 IOPS, 674.31 MiB/s 00:25:05.878 Latency(us) 00:25:05.878 [2024-11-19T10:27:01.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.878 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:05.878 nvme0n1 : 2.00 5393.53 674.19 0.00 0.00 2959.78 2305.90 6796.33 00:25:05.878 [2024-11-19T10:27:01.375Z] =================================================================================================================== 00:25:05.878 [2024-11-19T10:27:01.375Z] Total : 5393.53 674.19 0.00 0.00 2959.78 2305.90 6796.33 00:25:05.878 { 00:25:05.878 "results": [ 00:25:05.878 { 00:25:05.878 "job": "nvme0n1", 00:25:05.878 "core_mask": "0x2", 00:25:05.878 "workload": "randwrite", 00:25:05.878 "status": "finished", 00:25:05.878 "queue_depth": 16, 00:25:05.878 "io_size": 131072, 00:25:05.878 "runtime": 2.003325, 00:25:05.878 "iops": 5393.533250970262, 00:25:05.878 "mibps": 674.1916563712828, 00:25:05.878 "io_failed": 0, 00:25:05.878 "io_timeout": 0, 00:25:05.878 "avg_latency_us": 2959.7753424169196, 00:25:05.878 "min_latency_us": 2305.8962962962964, 00:25:05.878 "max_latency_us": 6796.325925925926 00:25:05.878 } 00:25:05.878 ], 00:25:05.878 "core_count": 1 00:25:05.878 } 00:25:05.878 11:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:05.878 11:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:05.878 11:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:05.879 11:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:05.879 11:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:25:05.879 | select(.opcode=="crc32c") 00:25:05.879 | "\(.module_name) \(.executed)"' 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2717807 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2717807 ']' 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2717807 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2717807 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2717807' 00:25:05.879 killing process with pid 2717807 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2717807 00:25:05.879 Received shutdown signal, test time was about 2.000000 seconds 00:25:05.879 
00:25:05.879 Latency(us) 00:25:05.879 [2024-11-19T10:27:01.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.879 [2024-11-19T10:27:01.376Z] =================================================================================================================== 00:25:05.879 [2024-11-19T10:27:01.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.879 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2717807 00:25:06.137 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2716436 00:25:06.137 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2716436 ']' 00:25:06.137 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2716436 00:25:06.137 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:06.137 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.137 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2716436 00:25:06.137 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:06.137 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:06.137 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2716436' 00:25:06.137 killing process with pid 2716436 00:25:06.137 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2716436 00:25:06.137 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2716436 00:25:06.395 00:25:06.396 real 
0m15.732s 00:25:06.396 user 0m30.980s 00:25:06.396 sys 0m5.045s 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:06.396 ************************************ 00:25:06.396 END TEST nvmf_digest_clean 00:25:06.396 ************************************ 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:06.396 ************************************ 00:25:06.396 START TEST nvmf_digest_error 00:25:06.396 ************************************ 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2718348 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:06.396 
11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2718348 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2718348 ']' 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.396 11:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.396 [2024-11-19 11:27:01.881879] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:25:06.396 [2024-11-19 11:27:01.881958] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.654 [2024-11-19 11:27:01.968693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.654 [2024-11-19 11:27:02.028563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.654 [2024-11-19 11:27:02.028618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:06.654 [2024-11-19 11:27:02.028633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.654 [2024-11-19 11:27:02.028659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.654 [2024-11-19 11:27:02.028677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.654 [2024-11-19 11:27:02.029239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.654 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.654 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:06.654 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.654 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.654 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.654 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.654 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:06.654 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.654 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.654 [2024-11-19 11:27:02.149982] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.913 11:27:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.913 null0 00:25:06.913 [2024-11-19 11:27:02.267216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.913 [2024-11-19 11:27:02.291481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2718498 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2718498 /var/tmp/bperf.sock 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2718498 ']' 
00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:06.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.913 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.913 [2024-11-19 11:27:02.338705] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:25:06.913 [2024-11-19 11:27:02.338785] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718498 ] 00:25:07.171 [2024-11-19 11:27:02.412878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.171 [2024-11-19 11:27:02.469854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.171 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.171 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:07.171 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:07.171 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:07.430 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:07.430 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.430 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:07.430 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.430 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.430 11:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.997 nvme0n1 00:25:07.997 11:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:07.997 11:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.997 11:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:07.997 11:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.997 11:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:07.997 11:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:07.997 Running I/O for 2 seconds... 00:25:07.997 [2024-11-19 11:27:03.478250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:07.997 [2024-11-19 11:27:03.478301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.997 [2024-11-19 11:27:03.478324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.319 [2024-11-19 11:27:03.494472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.319 [2024-11-19 11:27:03.494507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.319 [2024-11-19 11:27:03.494527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.319 [2024-11-19 11:27:03.508122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.319 [2024-11-19 11:27:03.508154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.319 [2024-11-19 11:27:03.508171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.319 [2024-11-19 11:27:03.520297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.319 [2024-11-19 11:27:03.520328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8257 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.319 [2024-11-19 11:27:03.520359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.319 [2024-11-19 11:27:03.534718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.319 [2024-11-19 11:27:03.534749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.319 [2024-11-19 11:27:03.534766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.319 [2024-11-19 11:27:03.544915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.319 [2024-11-19 11:27:03.544945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.319 [2024-11-19 11:27:03.544962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.319 [2024-11-19 11:27:03.558329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.319 [2024-11-19 11:27:03.558361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.319 [2024-11-19 11:27:03.558401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.319 [2024-11-19 11:27:03.573246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.319 [2024-11-19 11:27:03.573278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.319 [2024-11-19 11:27:03.573303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.319 [2024-11-19 11:27:03.588808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.319 [2024-11-19 11:27:03.588839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.319 [2024-11-19 11:27:03.588856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.319 [2024-11-19 11:27:03.603256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.319 [2024-11-19 11:27:03.603288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.319 [2024-11-19 11:27:03.603305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.319 [2024-11-19 11:27:03.617317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.319 [2024-11-19 11:27:03.617372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.319 [2024-11-19 11:27:03.617399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.319 [2024-11-19 11:27:03.628823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 
00:25:08.319 [2024-11-19 11:27:03.628853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.319 [2024-11-19 11:27:03.628869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.319 [2024-11-19 11:27:03.640373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.319 [2024-11-19 11:27:03.640413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.320 [2024-11-19 11:27:03.640432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.320 [2024-11-19 11:27:03.652081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.320 [2024-11-19 11:27:03.652111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.320 [2024-11-19 11:27:03.652128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.320 [2024-11-19 11:27:03.664057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.320 [2024-11-19 11:27:03.664087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.320 [2024-11-19 11:27:03.664104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.320 [2024-11-19 11:27:03.677297] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.320 [2024-11-19 11:27:03.677327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.320 [2024-11-19 11:27:03.677367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.320 [2024-11-19 11:27:03.693921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.320 [2024-11-19 11:27:03.693958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.320 [2024-11-19 11:27:03.693975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.320 [2024-11-19 11:27:03.708848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.320 [2024-11-19 11:27:03.708880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.320 [2024-11-19 11:27:03.708896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.320 [2024-11-19 11:27:03.723170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.320 [2024-11-19 11:27:03.723201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.320 [2024-11-19 11:27:03.723218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:08.320 [2024-11-19 11:27:03.740250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.320 [2024-11-19 11:27:03.740292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.320 [2024-11-19 11:27:03.740309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.320 [2024-11-19 11:27:03.754793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.320 [2024-11-19 11:27:03.754828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.320 [2024-11-19 11:27:03.754847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.609 [2024-11-19 11:27:03.766226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.609 [2024-11-19 11:27:03.766257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.609 [2024-11-19 11:27:03.766274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.782309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.782340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.782381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.796872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.796902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.796919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.808444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.808475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.808493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.820629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.820661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.820679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.834081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.834111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.834128] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.845676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.845706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.845723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.860688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.860718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.860735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.875489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.875519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.875537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.886466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.886498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12936 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.886515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.900542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.900572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.900590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.916862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.916892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.916909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.931439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.931476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.931495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.945346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.945402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:11600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.945420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.957235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.957267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.957284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.972430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.972461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.972479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:03.988258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:03.988288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:03.988305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:04.001430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:04.001462] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:04.001480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:04.014680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:04.014711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:04.014728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:04.024771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:04.024802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:04.024819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:04.036768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:04.036797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:04.036813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:04.051215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:04.051246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:04.051262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:04.067218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:04.067249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:04.067266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:04.077620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:04.077652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:04.077670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.610 [2024-11-19 11:27:04.092304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.610 [2024-11-19 11:27:04.092335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.610 [2024-11-19 11:27:04.092374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.108116] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.108147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.108164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.119023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.119053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.119070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.135468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.135500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.135518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.149731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.149763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.149796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.159481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.159512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.159536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.175000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.175028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.175044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.188638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.188668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.188684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.199845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.199875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.199891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.214158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.214188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.214204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.227925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.227955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.227971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.240092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.240121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.240138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.252401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.252431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.252448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.264417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.264448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.264466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.277038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.277071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.277089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.289081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.289109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.289126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.301621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.301652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19493 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.301684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.312510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.312540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.312557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.324651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.869 [2024-11-19 11:27:04.324696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.869 [2024-11-19 11:27:04.324713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.869 [2024-11-19 11:27:04.337053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.870 [2024-11-19 11:27:04.337082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.870 [2024-11-19 11:27:04.337097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.870 [2024-11-19 11:27:04.348320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.870 [2024-11-19 11:27:04.348348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:119 nsid:1 lba:15569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.870 [2024-11-19 11:27:04.348371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.870 [2024-11-19 11:27:04.364920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:08.870 [2024-11-19 11:27:04.364965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.870 [2024-11-19 11:27:04.364993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.128 [2024-11-19 11:27:04.375411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.128 [2024-11-19 11:27:04.375440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.128 [2024-11-19 11:27:04.375456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.128 [2024-11-19 11:27:04.389089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.128 [2024-11-19 11:27:04.389118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.128 [2024-11-19 11:27:04.389134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.128 [2024-11-19 11:27:04.401840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.128 [2024-11-19 11:27:04.401869] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.128 [2024-11-19 11:27:04.401886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.128 [2024-11-19 11:27:04.412867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.128 [2024-11-19 11:27:04.412896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.128 [2024-11-19 11:27:04.412912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.128 [2024-11-19 11:27:04.427531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.128 [2024-11-19 11:27:04.427561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.128 [2024-11-19 11:27:04.427593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.128 [2024-11-19 11:27:04.437278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.128 [2024-11-19 11:27:04.437307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.128 [2024-11-19 11:27:04.437323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.128 [2024-11-19 11:27:04.450400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x725510) 00:25:09.128 [2024-11-19 11:27:04.450431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.128 [2024-11-19 11:27:04.450459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.128 19051.00 IOPS, 74.42 MiB/s [2024-11-19T10:27:04.625Z] [2024-11-19 11:27:04.462438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.128 [2024-11-19 11:27:04.462468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.128 [2024-11-19 11:27:04.462486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.128 [2024-11-19 11:27:04.476502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.128 [2024-11-19 11:27:04.476533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.128 [2024-11-19 11:27:04.476549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.128 [2024-11-19 11:27:04.489682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.128 [2024-11-19 11:27:04.489712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.128 [2024-11-19 11:27:04.489734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.129 
[2024-11-19 11:27:04.500139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.129 [2024-11-19 11:27:04.500167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.129 [2024-11-19 11:27:04.500183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.129 [2024-11-19 11:27:04.514879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.129 [2024-11-19 11:27:04.514908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.129 [2024-11-19 11:27:04.514924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.129 [2024-11-19 11:27:04.527041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.129 [2024-11-19 11:27:04.527071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.129 [2024-11-19 11:27:04.527088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.129 [2024-11-19 11:27:04.539507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.129 [2024-11-19 11:27:04.539538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.129 [2024-11-19 11:27:04.539556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.129 [2024-11-19 11:27:04.550236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.129 [2024-11-19 11:27:04.550264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.129 [2024-11-19 11:27:04.550280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.129 [2024-11-19 11:27:04.561909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.129 [2024-11-19 11:27:04.561938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.129 [2024-11-19 11:27:04.561954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.129 [2024-11-19 11:27:04.573335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.129 [2024-11-19 11:27:04.573388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.129 [2024-11-19 11:27:04.573407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.129 [2024-11-19 11:27:04.587810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.129 [2024-11-19 11:27:04.587838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.129 [2024-11-19 11:27:04.587854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.129 [2024-11-19 11:27:04.601577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.129 [2024-11-19 11:27:04.601608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.129 [2024-11-19 11:27:04.601625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.129 [2024-11-19 11:27:04.617186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.129 [2024-11-19 11:27:04.617216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.129 [2024-11-19 11:27:04.617231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.631955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.631985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.388 [2024-11-19 11:27:04.632001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.643288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.643317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:09.388 [2024-11-19 11:27:04.643333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.657399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.657428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.388 [2024-11-19 11:27:04.657444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.670704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.670746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.388 [2024-11-19 11:27:04.670762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.686852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.686888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.388 [2024-11-19 11:27:04.686905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.699482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.699513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:24822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.388 [2024-11-19 11:27:04.699531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.710055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.710083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.388 [2024-11-19 11:27:04.710104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.723854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.723882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.388 [2024-11-19 11:27:04.723898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.738017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.738047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.388 [2024-11-19 11:27:04.738063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.747200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.747228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.388 [2024-11-19 11:27:04.747245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.761555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.761584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.388 [2024-11-19 11:27:04.761599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.776690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.776719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.388 [2024-11-19 11:27:04.776735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.792679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.388 [2024-11-19 11:27:04.792724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.388 [2024-11-19 11:27:04.792741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.388 [2024-11-19 11:27:04.808605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 
00:25:09.389 [2024-11-19 11:27:04.808636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.389 [2024-11-19 11:27:04.808666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.389 [2024-11-19 11:27:04.822216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.389 [2024-11-19 11:27:04.822246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.389 [2024-11-19 11:27:04.822263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.389 [2024-11-19 11:27:04.833537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.389 [2024-11-19 11:27:04.833573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.389 [2024-11-19 11:27:04.833592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.389 [2024-11-19 11:27:04.847431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.389 [2024-11-19 11:27:04.847476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.389 [2024-11-19 11:27:04.847493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.389 [2024-11-19 11:27:04.859683] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.389 [2024-11-19 11:27:04.859713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.389 [2024-11-19 11:27:04.859730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.389 [2024-11-19 11:27:04.871472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.389 [2024-11-19 11:27:04.871501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.389 [2024-11-19 11:27:04.871518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.389 [2024-11-19 11:27:04.882945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.389 [2024-11-19 11:27:04.882992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.389 [2024-11-19 11:27:04.883009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:04.896440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:04.896470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:04.896487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:04.906416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:04.906446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:04.906463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:04.920408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:04.920438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:04.920454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:04.932282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:04.932312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:04.932328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:04.945274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:04.945303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:04.945319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:04.959858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:04.959886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:04.959902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:04.975492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:04.975522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:04.975539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:04.985812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:04.985841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:04.985857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:04.997487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:04.997517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:04.997534] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:05.009547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:05.009599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:05.009616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:05.021413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:05.021442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:05.021460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:05.034733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:05.034762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:05.034779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:05.045929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:05.045957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15707 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:05.045978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:05.062220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:05.062250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:05.062266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:05.075030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:05.075060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:05.075076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:05.089108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:05.089138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:05.089155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:05.104102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:05.104131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:75 nsid:1 lba:16893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.648 [2024-11-19 11:27:05.104148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.648 [2024-11-19 11:27:05.118222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.648 [2024-11-19 11:27:05.118251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.649 [2024-11-19 11:27:05.118267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.649 [2024-11-19 11:27:05.130007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.649 [2024-11-19 11:27:05.130037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.649 [2024-11-19 11:27:05.130053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.149360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.149399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.149417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.159132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.159161] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.159177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.171753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.171788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.171804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.184723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.184753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.184769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.194781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.194810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.194826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.208107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.208137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.208153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.219972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.220001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.220017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.233901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.233931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.233948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.244007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.244036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.244052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.259347] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.259401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.259419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.274091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.274121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.274138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.290171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.290201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.290218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.305607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.305666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.305684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.316287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.316316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.316332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.329303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.329332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.329371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.341969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.341998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.342014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.354580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.354611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.354628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.366241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.366270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.908 [2024-11-19 11:27:05.366287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.908 [2024-11-19 11:27:05.379314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.908 [2024-11-19 11:27:05.379344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.909 [2024-11-19 11:27:05.379360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.909 [2024-11-19 11:27:05.392129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.909 [2024-11-19 11:27:05.392159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.909 [2024-11-19 11:27:05.392182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.909 [2024-11-19 11:27:05.404037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:09.909 [2024-11-19 11:27:05.404070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.909 [2024-11-19 11:27:05.404088] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.167 [2024-11-19 11:27:05.417775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:10.167 [2024-11-19 11:27:05.417805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.167 [2024-11-19 11:27:05.417822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.167 [2024-11-19 11:27:05.429077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:10.167 [2024-11-19 11:27:05.429108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.167 [2024-11-19 11:27:05.429125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.167 [2024-11-19 11:27:05.441333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:10.167 [2024-11-19 11:27:05.441384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.167 [2024-11-19 11:27:05.441402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.167 [2024-11-19 11:27:05.452724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:10.167 [2024-11-19 11:27:05.452753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23266 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:10.167 [2024-11-19 11:27:05.452770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.167 19321.00 IOPS, 75.47 MiB/s [2024-11-19T10:27:05.664Z] [2024-11-19 11:27:05.467893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x725510) 00:25:10.167 [2024-11-19 11:27:05.467922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.167 [2024-11-19 11:27:05.467939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.167 00:25:10.167 Latency(us) 00:25:10.167 [2024-11-19T10:27:05.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.167 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:10.167 nvme0n1 : 2.01 19315.86 75.45 0.00 0.00 6618.24 3276.80 24078.41 00:25:10.167 [2024-11-19T10:27:05.664Z] =================================================================================================================== 00:25:10.167 [2024-11-19T10:27:05.664Z] Total : 19315.86 75.45 0.00 0.00 6618.24 3276.80 24078.41 00:25:10.167 { 00:25:10.167 "results": [ 00:25:10.167 { 00:25:10.167 "job": "nvme0n1", 00:25:10.167 "core_mask": "0x2", 00:25:10.167 "workload": "randread", 00:25:10.167 "status": "finished", 00:25:10.167 "queue_depth": 128, 00:25:10.167 "io_size": 4096, 00:25:10.167 "runtime": 2.007159, 00:25:10.167 "iops": 19315.858883127843, 00:25:10.167 "mibps": 75.45257376221814, 00:25:10.167 "io_failed": 0, 00:25:10.167 "io_timeout": 0, 00:25:10.167 "avg_latency_us": 6618.236968713877, 00:25:10.167 "min_latency_us": 3276.8, 00:25:10.167 "max_latency_us": 24078.41185185185 00:25:10.167 } 00:25:10.167 ], 00:25:10.167 "core_count": 1 00:25:10.167 } 00:25:10.167 
11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:10.167 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:10.167 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:10.167 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:10.167 | .driver_specific 00:25:10.167 | .nvme_error 00:25:10.167 | .status_code 00:25:10.167 | .command_transient_transport_error' 00:25:10.426 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 )) 00:25:10.426 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2718498 00:25:10.426 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2718498 ']' 00:25:10.426 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2718498 00:25:10.426 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:10.426 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.426 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718498 00:25:10.426 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:10.426 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:10.426 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718498' 
00:25:10.426 killing process with pid 2718498 00:25:10.426 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2718498 00:25:10.426 Received shutdown signal, test time was about 2.000000 seconds 00:25:10.426 00:25:10.426 Latency(us) 00:25:10.426 [2024-11-19T10:27:05.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.426 [2024-11-19T10:27:05.923Z] =================================================================================================================== 00:25:10.426 [2024-11-19T10:27:05.923Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.426 11:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2718498 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2718910 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2718910 /var/tmp/bperf.sock 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2718910 ']' 00:25:10.685 11:27:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:10.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.685 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.685 [2024-11-19 11:27:06.080554] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:25:10.685 [2024-11-19 11:27:06.080650] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718910 ] 00:25:10.685 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:10.685 Zero copy mechanism will not be used. 
00:25:10.685 [2024-11-19 11:27:06.156571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.943 [2024-11-19 11:27:06.214371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.943 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.943 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:10.943 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:10.943 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:11.202 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:11.202 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.202 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:11.202 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.202 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:11.202 11:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:11.770 nvme0n1 00:25:11.770 11:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:11.770 11:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.770 11:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:11.770 11:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.770 11:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:11.770 11:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:11.770 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:11.770 Zero copy mechanism will not be used. 00:25:11.770 Running I/O for 2 seconds... 00:25:11.770 [2024-11-19 11:27:07.147840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:11.770 [2024-11-19 11:27:07.147892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.770 [2024-11-19 11:27:07.147913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:11.770 [2024-11-19 11:27:07.154059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:11.770 [2024-11-19 11:27:07.154090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.770 [2024-11-19 11:27:07.154107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:11.770 
[2024-11-19 11:27:07.159838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:11.770 [2024-11-19 11:27:07.159869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.770 [2024-11-19 11:27:07.159886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:11.770 [2024-11-19 11:27:07.165452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:11.770 [2024-11-19 11:27:07.165485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.770 [2024-11-19 11:27:07.165504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:11.770 [2024-11-19 11:27:07.171310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:11.770 [2024-11-19 11:27:07.171339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.770 [2024-11-19 11:27:07.171401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:11.770 [2024-11-19 11:27:07.177510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:11.770 [2024-11-19 11:27:07.177541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.770 [2024-11-19 11:27:07.177558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:11.770 [2024-11-19 11:27:07.184411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:11.770 [2024-11-19 11:27:07.184440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.770 [2024-11-19 11:27:07.184457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:11.770 [2024-11-19 11:27:07.191082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:11.770 [2024-11-19 11:27:07.191111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.770 [2024-11-19 11:27:07.191127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:11.770 [2024-11-19 11:27:07.197853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:11.770 [2024-11-19 11:27:07.197883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.770 [2024-11-19 11:27:07.197899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:11.770 [2024-11-19 11:27:07.204706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:11.770 [2024-11-19 11:27:07.204735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.770 [2024-11-19 11:27:07.204759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:11.770 [2024-11-19 11:27:07.211224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:11.770 [2024-11-19 11:27:07.211253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:11.770 [2024-11-19 11:27:07.211270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
[... same record triplet (nvme_tcp.c:1365 data digest error on tqpair=(0x10546f0) / nvme_qpair.c:243 READ sqid:1 / nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1) repeats for cids 5-14, len:32 reads, from 11:27:07.218 through 11:27:07.741 ...]
00:25:12.292 [2024-11-19 11:27:07.748981]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.292 [2024-11-19 11:27:07.749011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.292 [2024-11-19 11:27:07.749033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.292 [2024-11-19 11:27:07.755541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.292 [2024-11-19 11:27:07.755572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.292 [2024-11-19 11:27:07.755588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.292 [2024-11-19 11:27:07.762477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.292 [2024-11-19 11:27:07.762508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.292 [2024-11-19 11:27:07.762524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.292 [2024-11-19 11:27:07.768514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.292 [2024-11-19 11:27:07.768543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.292 [2024-11-19 11:27:07.768560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0057 p:0 m:0 dnr:0 00:25:12.292 [2024-11-19 11:27:07.775348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.292 [2024-11-19 11:27:07.775411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.292 [2024-11-19 11:27:07.775429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.292 [2024-11-19 11:27:07.782734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.292 [2024-11-19 11:27:07.782765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.292 [2024-11-19 11:27:07.782798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.552 [2024-11-19 11:27:07.791034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.552 [2024-11-19 11:27:07.791063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-11-19 11:27:07.791080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.552 [2024-11-19 11:27:07.799138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.552 [2024-11-19 11:27:07.799167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.552 [2024-11-19 11:27:07.799184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.806198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.806227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.806243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.812878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.812908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.812925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.818279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.818307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.818323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.823438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.823468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 
11:27:07.823484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.828485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.828514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.828530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.833648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.833708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.833725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.838659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.838690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.838706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.843708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.843750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.843766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.848849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.848876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.848892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.854060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.854099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.854115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.859038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.859065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.859082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.864799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.864836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.864852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.871811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.871850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.871866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.878996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.879026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.879042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.884892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.884922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.884941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.890572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.890602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.890619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.896031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.896071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.896087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.902127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.902158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.902186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.909585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.909617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.909634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.917679] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.917710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.917727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.926161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.926191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.926208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.934048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.934078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.934107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.940878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.940908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.940933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0037 
p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.948236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.948265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.948282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.553 [2024-11-19 11:27:07.956439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.553 [2024-11-19 11:27:07.956468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.553 [2024-11-19 11:27:07.956485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:07.963378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:07.963408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:07.963425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:07.969430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:07.969460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:07.969477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:07.975445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:07.975483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:07.975500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:07.981419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:07.981448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:07.981466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:07.988944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:07.988973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:07.988989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:07.996462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:07.996502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:07.996518] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:08.002759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:08.002795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:08.002811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:08.008234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:08.008263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:08.008286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:08.014391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:08.014422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:08.014439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:08.019918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:08.019947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:08.019963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:08.026583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:08.026614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:08.026632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:08.034239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:08.034268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:08.034284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:08.040406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:08.040435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:08.040452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.554 [2024-11-19 11:27:08.047046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.554 [2024-11-19 11:27:08.047075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.554 [2024-11-19 11:27:08.047092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:12.814 [2024-11-19 11:27:08.054020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.814 [2024-11-19 11:27:08.054051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.814 [2024-11-19 11:27:08.054068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:12.814 [2024-11-19 11:27:08.059807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.814 [2024-11-19 11:27:08.059835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.814 [2024-11-19 11:27:08.059852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:12.814 [2024-11-19 11:27:08.066617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.814 [2024-11-19 11:27:08.066648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.814 [2024-11-19 11:27:08.066680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.815 [2024-11-19 11:27:08.073216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:12.815 [2024-11-19 
11:27:08.073246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.073263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.079487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.079517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.079534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.085604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.085634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.085666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.091686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.091717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.091734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.097221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.097249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.097265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.103425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.103456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.103473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.110088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.110119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.110143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.116924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.116953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.116971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.123236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.123264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.123281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.129965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.129995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.130011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:12.815 4702.00 IOPS, 587.75 MiB/s [2024-11-19T10:27:08.312Z] [2024-11-19 11:27:08.137960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.137989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.138005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.144116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.144146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.144162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.147619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.147648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.147664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.153699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.153729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.153746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.158952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.158980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.158996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.166045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.166074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.166091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.172466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.172512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.172529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.178098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.178126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.178142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.184147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.184179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.184196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.190850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.190879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.190895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.197117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.197146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.197163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.202831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.202859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.202875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.208709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.815 [2024-11-19 11:27:08.208737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.815 [2024-11-19 11:27:08.208754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:12.815 [2024-11-19 11:27:08.214423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.214453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.214483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.219433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.219462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.219479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.224460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.224489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.224506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.229640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.229691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.229708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.234881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.234917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.234934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.240465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.240499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.240515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.246230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.246259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.246275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.251515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.251544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.251559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.257047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.257075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.257091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.262588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.262623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.262642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.268219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.268246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.268261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.273827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.273856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.273871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.279460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.279488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.279505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.285022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.285052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.285069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.290814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.290842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.290858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.296489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.296518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.296535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.302193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.302221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.302237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:12.816 [2024-11-19 11:27:08.307807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:12.816 [2024-11-19 11:27:08.307852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.816 [2024-11-19 11:27:08.307869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.313629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.313658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.313674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.319306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.319334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.319374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.324919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.324948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.324964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.330866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.330894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.330909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.336591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.336620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.336636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.342265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.342294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.342310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.347513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.347542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.347560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.352750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.352778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.352795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.357916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.357944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.357965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.363670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.363714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.363730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.369196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.369224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.369241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.374580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.374609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.374626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.379974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.380002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.380019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.385872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.385900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.385916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.392003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.392031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.392048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.398139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.398168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.398184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.404046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.404075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.404091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.410122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.410150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.076 [2024-11-19 11:27:08.410167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:13.076 [2024-11-19 11:27:08.414075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.076 [2024-11-19 11:27:08.414102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.414118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.420651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.420695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.420712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.426803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.426834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.426854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.433557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.433588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.433606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.440428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.440459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.440477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.447106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.447134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.447150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.454133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.454163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.454179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.461003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.461033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.461069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.466842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.466870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.466886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.472879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.472907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.472923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.478930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.478958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.478973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.484948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.484976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.484992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.491289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.491318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.491334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.498217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.498245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.498261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.504332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.504384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.504402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.510521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.510551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.510567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.516712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.516745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.516761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.522851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.522879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.522894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.529226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.529255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.529272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.535863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.535892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.535908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.542322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.542371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.542390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.548958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.548986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:13.077 [2024-11-19 11:27:08.549001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:13.077 [2024-11-19 11:27:08.555924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0)
00:25:13.077 [2024-11-19 11:27:08.555952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-11-19 11:27:08.555967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.077 [2024-11-19 11:27:08.562742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.077 [2024-11-19 11:27:08.562772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-11-19 11:27:08.562789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.077 [2024-11-19 11:27:08.569600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.077 [2024-11-19 11:27:08.569631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.077 [2024-11-19 11:27:08.569648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.576571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.576619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.576636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.583542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.583573] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.583589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.590210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.590239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.590256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.596990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.597018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.597034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.603581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.603610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.603626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.610227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.610255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.610272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.617049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.617077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.617093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.623783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.623811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.623827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.630678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.630706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.630727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.637187] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.637215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.637231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.643756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.643784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.643800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.650356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.650392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.650424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.657045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.657074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.657090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0037 
p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.663833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.663862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.663877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.670598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.670628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.670647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.676490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.676522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.337 [2024-11-19 11:27:08.676540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.337 [2024-11-19 11:27:08.681920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.337 [2024-11-19 11:27:08.681949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.681966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.687659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.687695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.687728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.693807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.693836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.693853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.699459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.699490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.699508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.704974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.705015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.705031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.710506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.710538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.710557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.716264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.716293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.716309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.722381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.722421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.722440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.728131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.728168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.728185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.734279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.734308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.734330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.739989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.740019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.740036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.746144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.746173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.746189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.753487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.753522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.753540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.761246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.761275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.761292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.769181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.769209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.769226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.776943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.776972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.776989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.784462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.784492] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.784509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.792559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.792589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.792606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.800064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.800113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.800130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.807779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.807808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.807833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.815328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.815382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.815400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.822837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.822866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.822883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.338 [2024-11-19 11:27:08.830276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.338 [2024-11-19 11:27:08.830304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.338 [2024-11-19 11:27:08.830320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.598 [2024-11-19 11:27:08.837901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.598 [2024-11-19 11:27:08.837948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.598 [2024-11-19 11:27:08.837966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.598 [2024-11-19 11:27:08.845548] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.598 [2024-11-19 11:27:08.845578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.598 [2024-11-19 11:27:08.845595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.598 [2024-11-19 11:27:08.853098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.598 [2024-11-19 11:27:08.853126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.598 [2024-11-19 11:27:08.853142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.598 [2024-11-19 11:27:08.860506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.598 [2024-11-19 11:27:08.860536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.598 [2024-11-19 11:27:08.860552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.598 [2024-11-19 11:27:08.866749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.598 [2024-11-19 11:27:08.866779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.598 [2024-11-19 11:27:08.866796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0017 
p:0 m:0 dnr:0 00:25:13.598 [2024-11-19 11:27:08.872104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.598 [2024-11-19 11:27:08.872133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.598 [2024-11-19 11:27:08.872150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.598 [2024-11-19 11:27:08.877865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.598 [2024-11-19 11:27:08.877894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.598 [2024-11-19 11:27:08.877909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.883894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.883932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.883948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.889801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.889830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.889846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.896983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.897012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.897037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.903849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.903879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.903895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.910552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.910583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.910601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.916458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.916487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.916511] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.922433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.922463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.922481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.926516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.926548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.926566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.933257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.933287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.933303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.939939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.939969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.939985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.947060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.947089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.947105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.953485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.953515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.953532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.959742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.959772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.959789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.965451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.965481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.965499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.971931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.971967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.971984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.977786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.977815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.977831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.983047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.983076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.983091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.988718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.599 [2024-11-19 11:27:08.988747] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.599 [2024-11-19 11:27:08.988763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.599 [2024-11-19 11:27:08.995188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:08.995217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:08.995233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.001749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.001778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.001803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.008138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.008167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.008183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.014736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.014774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.014790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.020901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.020932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.020948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.026856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.026894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.026911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.032683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.032711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.032728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.038168] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.038197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.038213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.044302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.044331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.044367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.050613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.050642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.050673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.057801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.057831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.057848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0077 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.064544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.064575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.064592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.071171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.071200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.071216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.077841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.077871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.077895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.084201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.084230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.084246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.600 [2024-11-19 11:27:09.089611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.600 [2024-11-19 11:27:09.089642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.600 [2024-11-19 11:27:09.089674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.858 [2024-11-19 11:27:09.095619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.858 [2024-11-19 11:27:09.095669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.858 [2024-11-19 11:27:09.095686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.858 [2024-11-19 11:27:09.101419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.858 [2024-11-19 11:27:09.101450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.858 [2024-11-19 11:27:09.101467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.858 [2024-11-19 11:27:09.106787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.858 [2024-11-19 11:27:09.106816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.858 [2024-11-19 
11:27:09.106832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.858 [2024-11-19 11:27:09.112768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.858 [2024-11-19 11:27:09.112812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.858 [2024-11-19 11:27:09.112829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.858 [2024-11-19 11:27:09.118515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.858 [2024-11-19 11:27:09.118549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.858 [2024-11-19 11:27:09.118567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:13.858 [2024-11-19 11:27:09.124387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.858 [2024-11-19 11:27:09.124419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.858 [2024-11-19 11:27:09.124437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:13.858 [2024-11-19 11:27:09.130594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.858 [2024-11-19 11:27:09.130626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.858 [2024-11-19 11:27:09.130644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:13.858 [2024-11-19 11:27:09.137981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10546f0) 00:25:13.858 [2024-11-19 11:27:09.138010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.858 [2024-11-19 11:27:09.138044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:13.858 4851.00 IOPS, 606.38 MiB/s 00:25:13.858 Latency(us) 00:25:13.858 [2024-11-19T10:27:09.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.858 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:13.858 nvme0n1 : 2.00 4847.95 605.99 0.00 0.00 3296.75 761.55 11942.12 00:25:13.858 [2024-11-19T10:27:09.355Z] =================================================================================================================== 00:25:13.858 [2024-11-19T10:27:09.355Z] Total : 4847.95 605.99 0.00 0.00 3296.75 761.55 11942.12 00:25:13.858 { 00:25:13.858 "results": [ 00:25:13.858 { 00:25:13.858 "job": "nvme0n1", 00:25:13.858 "core_mask": "0x2", 00:25:13.858 "workload": "randread", 00:25:13.858 "status": "finished", 00:25:13.858 "queue_depth": 16, 00:25:13.858 "io_size": 131072, 00:25:13.858 "runtime": 2.004766, 00:25:13.858 "iops": 4847.947341485241, 00:25:13.858 "mibps": 605.9934176856551, 00:25:13.858 "io_failed": 0, 00:25:13.858 "io_timeout": 0, 00:25:13.858 "avg_latency_us": 3296.753899235175, 00:25:13.858 "min_latency_us": 761.5525925925926, 00:25:13.858 "max_latency_us": 11942.115555555556 00:25:13.858 } 00:25:13.858 ], 00:25:13.858 "core_count": 1 00:25:13.858 } 00:25:13.858 
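The bdevperf run above emits its summary as the JSON "results" block just printed. A minimal Python sketch of parsing that shape (field names copied from the block above; this is an illustration, not part of the test scripts):

```python
import json

# JSON shaped like the bdevperf "results" block above.
summary = json.loads("""
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randread",
      "status": "finished",
      "queue_depth": 16,
      "io_size": 131072,
      "runtime": 2.004766,
      "iops": 4847.947341485241,
      "mibps": 605.9934176856551,
      "io_failed": 0,
      "io_timeout": 0
    }
  ],
  "core_count": 1
}
""")

job = summary["results"][0]
# MiB/s can be re-derived from IOPS and the per-IO size in bytes.
mibps = job["iops"] * job["io_size"] / (1024 * 1024)
print(f"{job['job']}: {job['iops']:.2f} IOPS, {mibps:.2f} MiB/s")
```

The re-derived MiB/s matches the "mibps" field reported in the block, which is a quick sanity check when post-processing these logs.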
11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:13.858 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:13.858 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:13.858 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:13.858 | .driver_specific 00:25:13.858 | .nvme_error 00:25:13.858 | .status_code 00:25:13.858 | .command_transient_transport_error' 00:25:14.116 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 313 > 0 )) 00:25:14.116 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2718910 00:25:14.116 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2718910 ']' 00:25:14.116 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2718910 00:25:14.116 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:14.116 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:14.116 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718910 00:25:14.116 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:14.116 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:14.116 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718910' 
00:25:14.116 killing process with pid 2718910 00:25:14.116 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2718910 00:25:14.116 Received shutdown signal, test time was about 2.000000 seconds 00:25:14.116 00:25:14.117 Latency(us) 00:25:14.117 [2024-11-19T10:27:09.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.117 [2024-11-19T10:27:09.614Z] =================================================================================================================== 00:25:14.117 [2024-11-19T10:27:09.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:14.117 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2718910 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2719819 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2719819 /var/tmp/bperf.sock 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2719819 ']' 00:25:14.375 11:27:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:14.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.375 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:14.375 [2024-11-19 11:27:09.727415] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:25:14.375 [2024-11-19 11:27:09.727491] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2719819 ] 00:25:14.375 [2024-11-19 11:27:09.801985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.375 [2024-11-19 11:27:09.864701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.633 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.633 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:14.633 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:14.633 11:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:14.890 11:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:14.890 11:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.890 11:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:14.890 11:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.890 11:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:14.890 11:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.455 nvme0n1 00:25:15.455 11:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:15.455 11:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.455 11:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.455 11:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.455 11:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:15.455 11:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:15.455 
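The get_transient_errcount helper earlier in the trace pipes bdev_get_iostat through the jq path .bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error and checks the count is positive. A minimal sketch of the same extraction in Python (iostat JSON shape inferred from that jq filter; the count 313 is the value the trace's (( 313 > 0 )) check observed, the other fields are hypothetical):

```python
import json

# bdev_get_iostat output shaped as implied by the jq filter in the trace.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 313
          }
        }
      }
    }
  ]
}
""")

# Walk the same path the jq expression selects.
errcount = (iostat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])
print(errcount)
```

The test passes only if the injected digest corruption actually produced transient transport errors, i.e. the extracted count is greater than zero.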
Running I/O for 2 seconds... 00:25:15.455 [2024-11-19 11:27:10.868998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ef270 00:25:15.455 [2024-11-19 11:27:10.870156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.455 [2024-11-19 11:27:10.870191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:15.455 [2024-11-19 11:27:10.879900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f81e0 00:25:15.455 [2024-11-19 11:27:10.880814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.455 [2024-11-19 11:27:10.880840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:15.455 [2024-11-19 11:27:10.893314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166dece0 00:25:15.455 [2024-11-19 11:27:10.894682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.455 [2024-11-19 11:27:10.894710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:15.455 [2024-11-19 11:27:10.904217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166dece0 00:25:15.455 [2024-11-19 11:27:10.905519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.455 [2024-11-19 11:27:10.905547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:15.455 [2024-11-19 11:27:10.915566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e5658 00:25:15.455 [2024-11-19 11:27:10.916839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.455 [2024-11-19 11:27:10.916865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.455 [2024-11-19 11:27:10.927026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e5a90 00:25:15.455 [2024-11-19 11:27:10.928303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.455 [2024-11-19 11:27:10.928329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:15.455 [2024-11-19 11:27:10.937494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f96f8 00:25:15.455 [2024-11-19 11:27:10.938784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.456 [2024-11-19 11:27:10.938809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.456 [2024-11-19 11:27:10.949704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166df118 00:25:15.456 [2024-11-19 11:27:10.951371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.456 [2024-11-19 11:27:10.951398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.714 [2024-11-19 11:27:10.960821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f4b08 00:25:15.714 [2024-11-19 11:27:10.961912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.714 [2024-11-19 11:27:10.961939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:15.714 [2024-11-19 11:27:10.971921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ed0b0 00:25:15.714 [2024-11-19 11:27:10.973140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.714 [2024-11-19 11:27:10.973167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:15.714 [2024-11-19 11:27:10.983243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166de8a8 00:25:15.714 [2024-11-19 11:27:10.983955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.714 [2024-11-19 11:27:10.983981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.714 [2024-11-19 11:27:10.995184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166fc560 00:25:15.714 [2024-11-19 11:27:10.996146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:15.714 [2024-11-19 11:27:10.996171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:15.714 [2024-11-19 11:27:11.006560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166fc560
00:25:15.714 [2024-11-19 11:27:11.007573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.714 [2024-11-19 11:27:11.007602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:15.714 [2024-11-19 11:27:11.017185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e27f0
00:25:15.714 [2024-11-19 11:27:11.018196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.714 [2024-11-19 11:27:11.018221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:15.714 [2024-11-19 11:27:11.029172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f2d80
00:25:15.714 [2024-11-19 11:27:11.030310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.714 [2024-11-19 11:27:11.030355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:15.714 [2024-11-19 11:27:11.041046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166fef90
00:25:15.714 [2024-11-19 11:27:11.042373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.714 [2024-11-19 11:27:11.042400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:15.714 [2024-11-19 11:27:11.052569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e5a90
00:25:15.714 [2024-11-19 11:27:11.053975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.714 [2024-11-19 11:27:11.054000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:15.714 [2024-11-19 11:27:11.063847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166fe2e8
00:25:15.714 [2024-11-19 11:27:11.065350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.714 [2024-11-19 11:27:11.065398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:15.714 [2024-11-19 11:27:11.073930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e5ec8
00:25:15.714 [2024-11-19 11:27:11.075217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.714 [2024-11-19 11:27:11.075243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:15.714 [2024-11-19 11:27:11.085113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f96f8
00:25:15.714 [2024-11-19 11:27:11.086186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.714 [2024-11-19 11:27:11.086212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:15.714 [2024-11-19 11:27:11.096212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f1ca0
00:25:15.714 [2024-11-19 11:27:11.097404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.714 [2024-11-19 11:27:11.097430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:15.714 [2024-11-19 11:27:11.106483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ec840
00:25:15.714 [2024-11-19 11:27:11.107253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.714 [2024-11-19 11:27:11.107278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:15.714 [2024-11-19 11:27:11.117726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f1430
00:25:15.715 [2024-11-19 11:27:11.118312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.715 [2024-11-19 11:27:11.118337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:25:15.715 [2024-11-19 11:27:11.129333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f8a50
00:25:15.715 [2024-11-19 11:27:11.130131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.715 [2024-11-19 11:27:11.130163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:15.715 [2024-11-19 11:27:11.142679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e9e10
00:25:15.715 [2024-11-19 11:27:11.144298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.715 [2024-11-19 11:27:11.144323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:15.715 [2024-11-19 11:27:11.154616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e73e0
00:25:15.715 [2024-11-19 11:27:11.156341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.715 [2024-11-19 11:27:11.156388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:15.715 [2024-11-19 11:27:11.162422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166eaab8
00:25:15.715 [2024-11-19 11:27:11.163125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.715 [2024-11-19 11:27:11.163152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:15.715 [2024-11-19 11:27:11.176808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166de038
00:25:15.715 [2024-11-19 11:27:11.178623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.715 [2024-11-19 11:27:11.178654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:15.715 [2024-11-19 11:27:11.184708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f6020
00:25:15.715 [2024-11-19 11:27:11.185468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.715 [2024-11-19 11:27:11.185494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:15.715 [2024-11-19 11:27:11.197180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e1f80
00:25:15.715 [2024-11-19 11:27:11.198775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.715 [2024-11-19 11:27:11.198801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:15.715 [2024-11-19 11:27:11.207761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ed4e8
00:25:15.715 [2024-11-19 11:27:11.208775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.715 [2024-11-19 11:27:11.208801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:15.973 [2024-11-19 11:27:11.220265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ee5c8
00:25:15.973 [2024-11-19 11:27:11.221516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.221543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.231948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e7c50
00:25:15.974 [2024-11-19 11:27:11.233144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.233170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.243293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e99d8
00:25:15.974 [2024-11-19 11:27:11.244265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.244291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.253812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f2948
00:25:15.974 [2024-11-19 11:27:11.255255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.255280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.264109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e99d8
00:25:15.974 [2024-11-19 11:27:11.264963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.264989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.275630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166eb328
00:25:15.974 [2024-11-19 11:27:11.276607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.276634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.286769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e7818
00:25:15.974 [2024-11-19 11:27:11.287821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.287845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.298498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166fd208
00:25:15.974 [2024-11-19 11:27:11.299676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.299705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.312010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e27f0
00:25:15.974 [2024-11-19 11:27:11.313857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.313882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.319891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e1b48
00:25:15.974 [2024-11-19 11:27:11.320755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.320779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.333971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f4f40
00:25:15.974 [2024-11-19 11:27:11.335654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.335694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.344555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166fe720
00:25:15.974 [2024-11-19 11:27:11.346018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.346043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.353760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e7818
00:25:15.974 [2024-11-19 11:27:11.354442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.354468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.367661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ea248
00:25:15.974 [2024-11-19 11:27:11.369539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.369566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.375707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8
00:25:15.974 [2024-11-19 11:27:11.376636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.376676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.387029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f7100
00:25:15.974 [2024-11-19 11:27:11.387981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.388024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.400620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f31b8
00:25:15.974 [2024-11-19 11:27:11.402069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.402094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.409781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166eee38
00:25:15.974 [2024-11-19 11:27:11.410618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.410643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.420276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e12d8
00:25:15.974 [2024-11-19 11:27:11.421071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.421101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.431948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ebb98
00:25:15.974 [2024-11-19 11:27:11.432891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.432916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.443577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e4140
00:25:15.974 [2024-11-19 11:27:11.444624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.444651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.454834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ddc00
00:25:15.974 [2024-11-19 11:27:11.455410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.455436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:15.974 [2024-11-19 11:27:11.466106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ed0b0
00:25:15.974 [2024-11-19 11:27:11.467299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.974 [2024-11-19 11:27:11.467324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:16.233 [2024-11-19 11:27:11.478750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ecc78
00:25:16.233 [2024-11-19 11:27:11.480018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.233 [2024-11-19 11:27:11.480044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:25:16.233 [2024-11-19 11:27:11.492560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e4578
00:25:16.233 [2024-11-19 11:27:11.494375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.233 [2024-11-19 11:27:11.494402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:16.233 [2024-11-19 11:27:11.500812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166edd58
00:25:16.233 [2024-11-19 11:27:11.501817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.233 [2024-11-19 11:27:11.501841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:16.233 [2024-11-19 11:27:11.514806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e1f80
00:25:16.233 [2024-11-19 11:27:11.516327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.233 [2024-11-19 11:27:11.516352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:16.233 [2024-11-19 11:27:11.525249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f1868
00:25:16.233 [2024-11-19 11:27:11.526490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.233 [2024-11-19 11:27:11.526518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:16.233 [2024-11-19 11:27:11.538470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e5a90
00:25:16.233 [2024-11-19 11:27:11.540358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.233 [2024-11-19 11:27:11.540404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:16.233 [2024-11-19 11:27:11.546292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f9b30
00:25:16.234 [2024-11-19 11:27:11.547202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.547227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.557483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e8d30
00:25:16.234 [2024-11-19 11:27:11.558459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.558485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.568788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166eb328
00:25:16.234 [2024-11-19 11:27:11.569787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.569811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.580206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f92c0
00:25:16.234 [2024-11-19 11:27:11.580935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.580960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.592763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f9f68
00:25:16.234 [2024-11-19 11:27:11.594179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.594205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.601952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ebb98
00:25:16.234 [2024-11-19 11:27:11.602795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.602820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.612222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166df550
00:25:16.234 [2024-11-19 11:27:11.613082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.613106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.625733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ddc00
00:25:16.234 [2024-11-19 11:27:11.627166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.627191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.637011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f0ff8
00:25:16.234 [2024-11-19 11:27:11.638509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.638535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.648288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f8a50
00:25:16.234 [2024-11-19 11:27:11.649358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.649394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.659069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e01f8
00:25:16.234 [2024-11-19 11:27:11.660691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.660732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.668882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166eaef0
00:25:16.234 [2024-11-19 11:27:11.669777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.669802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.682524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166eb760
00:25:16.234 [2024-11-19 11:27:11.683879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.683905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.692916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f5be8
00:25:16.234 [2024-11-19 11:27:11.694096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.694123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.705789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f31b8
00:25:16.234 [2024-11-19 11:27:11.707568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.707594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.713650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e4de8
00:25:16.234 [2024-11-19 11:27:11.714395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.714425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:16.234 [2024-11-19 11:27:11.724145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f0788
00:25:16.234 [2024-11-19 11:27:11.724920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.234 [2024-11-19 11:27:11.724945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:16.493 [2024-11-19 11:27:11.737106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e5a90
00:25:16.493 [2024-11-19 11:27:11.738002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.493 [2024-11-19 11:27:11.738027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:16.493 [2024-11-19 11:27:11.749163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e0ea0
00:25:16.493 [2024-11-19 11:27:11.750269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.493 [2024-11-19 11:27:11.750294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:16.493 [2024-11-19 11:27:11.760954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e6b70
00:25:16.493 [2024-11-19 11:27:11.761586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.493 [2024-11-19 11:27:11.761614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:16.493 [2024-11-19 11:27:11.773007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166fc128
00:25:16.493 [2024-11-19 11:27:11.773789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.493 [2024-11-19 11:27:11.773815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:16.493 [2024-11-19 11:27:11.784040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f20d8
00:25:16.493 [2024-11-19 11:27:11.785056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.493 [2024-11-19 11:27:11.785081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:16.493 [2024-11-19 11:27:11.795074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166de470
00:25:16.493 [2024-11-19 11:27:11.795961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.493 [2024-11-19 11:27:11.795986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:16.493 [2024-11-19 11:27:11.805540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f81e0
00:25:16.493 [2024-11-19 11:27:11.806454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.493 [2024-11-19 11:27:11.806478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:16.493 [2024-11-19 11:27:11.818981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166fa7d8
00:25:16.493 [2024-11-19 11:27:11.820568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.493 [2024-11-19 11:27:11.820595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:16.493 [2024-11-19 11:27:11.829117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e8d30
00:25:16.493 [2024-11-19 11:27:11.830565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.493 [2024-11-19 11:27:11.830590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:16.493 [2024-11-19 11:27:11.838654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e1710 00:25:16.493 [2024-11-19 11:27:11.839413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.493 [2024-11-19 11:27:11.839438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:16.493 [2024-11-19 11:27:11.850289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e5ec8 00:25:16.493 [2024-11-19 11:27:11.851244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.493 [2024-11-19 11:27:11.851270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:16.493 22469.00 IOPS, 87.77 MiB/s [2024-11-19T10:27:11.990Z] [2024-11-19 11:27:11.863723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e27f0 00:25:16.493 [2024-11-19 11:27:11.865108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.493 [2024-11-19 11:27:11.865134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:16.493 [2024-11-19 11:27:11.873092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e6fa8 00:25:16.493 [2024-11-19 11:27:11.873989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.493 [2024-11-19 11:27:11.874017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:16.493 [2024-11-19 11:27:11.884524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f7100 00:25:16.493 [2024-11-19 11:27:11.885348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.493 [2024-11-19 11:27:11.885383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:16.493 [2024-11-19 11:27:11.896157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f0ff8 00:25:16.493 [2024-11-19 11:27:11.896944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.493 [2024-11-19 11:27:11.896976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:16.493 [2024-11-19 11:27:11.909335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ec408 00:25:16.493 [2024-11-19 11:27:11.910841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.493 [2024-11-19 11:27:11.910869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:16.493 [2024-11-19 11:27:11.920537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f8e88 
00:25:16.493 [2024-11-19 11:27:11.921834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.493 [2024-11-19 11:27:11.921859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:16.493 [2024-11-19 11:27:11.932394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f6458 00:25:16.493 [2024-11-19 11:27:11.933952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.493 [2024-11-19 11:27:11.933978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:16.493 [2024-11-19 11:27:11.942545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e8d30 00:25:16.493 [2024-11-19 11:27:11.944086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.493 [2024-11-19 11:27:11.944110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:16.493 [2024-11-19 11:27:11.952094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f7538 00:25:16.493 [2024-11-19 11:27:11.952887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.494 [2024-11-19 11:27:11.952912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:16.494 [2024-11-19 11:27:11.963772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x19f1220) with pdu=0x2000166e4578 00:25:16.494 [2024-11-19 11:27:11.964695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.494 [2024-11-19 11:27:11.964720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:16.494 [2024-11-19 11:27:11.976270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f20d8 00:25:16.494 [2024-11-19 11:27:11.977454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.494 [2024-11-19 11:27:11.977480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:16.494 [2024-11-19 11:27:11.987813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166eaef0 00:25:16.494 [2024-11-19 11:27:11.989089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.494 [2024-11-19 11:27:11.989115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.000067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166fb8b8 00:25:16.752 [2024-11-19 11:27:12.001321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.001346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.010703] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166fd208 00:25:16.752 [2024-11-19 11:27:12.011914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.011944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.022332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e27f0 00:25:16.752 [2024-11-19 11:27:12.023704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.023729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.032780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f81e0 00:25:16.752 [2024-11-19 11:27:12.033711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.033736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.044109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e5ec8 00:25:16.752 [2024-11-19 11:27:12.044971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.044996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:25:16.752 [2024-11-19 11:27:12.055807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e3498 00:25:16.752 [2024-11-19 11:27:12.056824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.056855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.066118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f9b30 00:25:16.752 [2024-11-19 11:27:12.067179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.067204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.077464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166fc998 00:25:16.752 [2024-11-19 11:27:12.078397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.078423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.088696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e38d0 00:25:16.752 [2024-11-19 11:27:12.089586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.089611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.099714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e38d0 00:25:16.752 [2024-11-19 11:27:12.100647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.100698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.113648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e27f0 00:25:16.752 [2024-11-19 11:27:12.115469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.115496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.122069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166ee5c8 00:25:16.752 [2024-11-19 11:27:12.122924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.122952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.133454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166df550 00:25:16.752 [2024-11-19 11:27:12.134204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.134229] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.144986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f0bc0 00:25:16.752 [2024-11-19 11:27:12.146134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.146159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.157022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e0630 00:25:16.752 [2024-11-19 11:27:12.157691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.157733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.169076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e3498 00:25:16.752 [2024-11-19 11:27:12.169898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.169934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.182412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f9b30 00:25:16.752 [2024-11-19 11:27:12.184180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 
11:27:12.184206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.190425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f5378 00:25:16.752 [2024-11-19 11:27:12.191367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.191417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.201996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e1f80 00:25:16.752 [2024-11-19 11:27:12.203073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.752 [2024-11-19 11:27:12.203098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:16.752 [2024-11-19 11:27:12.213566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166f0350 00:25:16.752 [2024-11-19 11:27:12.214791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.753 [2024-11-19 11:27:12.214817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.753 [2024-11-19 11:27:12.225082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:16.753 [2024-11-19 11:27:12.225245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11097 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:16.753 [2024-11-19 11:27:12.225271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.753 [2024-11-19 11:27:12.237102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:16.753 [2024-11-19 11:27:12.237281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.753 [2024-11-19 11:27:12.237306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.011 [2024-11-19 11:27:12.249791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.011 [2024-11-19 11:27:12.250031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.011 [2024-11-19 11:27:12.250055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.011 [2024-11-19 11:27:12.262311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.011 [2024-11-19 11:27:12.262548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.011 [2024-11-19 11:27:12.262573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.274472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.274693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:25110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.274719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.286681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.286923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.286949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.298855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.299086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.299111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.310955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.311141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.311171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.323091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.323329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.323355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.335203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.335407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.335432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.347256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.347521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.347547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.359456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.359671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.359697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.371585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 
00:25:17.012 [2024-11-19 11:27:12.371817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.371842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.383794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.384031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.384057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.395872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.396067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.396093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.407953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.408201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.408228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.420575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.420813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.420838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.433076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.433303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.433328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.445212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.445466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.445493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.457320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.457572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.457599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.469512] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.469734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.469759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.481718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.481962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.481988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.493862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.494094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.494119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.012 [2024-11-19 11:27:12.506212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.012 [2024-11-19 11:27:12.506480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.012 [2024-11-19 11:27:12.506507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:25:17.271 [2024-11-19 11:27:12.518894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.271 [2024-11-19 11:27:12.519091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.271 [2024-11-19 11:27:12.519116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.271 [2024-11-19 11:27:12.530844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.271 [2024-11-19 11:27:12.531039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.271 [2024-11-19 11:27:12.531065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.271 [2024-11-19 11:27:12.542800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.271 [2024-11-19 11:27:12.542989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.271 [2024-11-19 11:27:12.543015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.271 [2024-11-19 11:27:12.554855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.271 [2024-11-19 11:27:12.555052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.271 [2024-11-19 11:27:12.555077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.271 [2024-11-19 11:27:12.566984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.271 [2024-11-19 11:27:12.567214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.271 [2024-11-19 11:27:12.567240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.271 [2024-11-19 11:27:12.579154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.271 [2024-11-19 11:27:12.579353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.271 [2024-11-19 11:27:12.579391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.271 [2024-11-19 11:27:12.591264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.271 [2024-11-19 11:27:12.591487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.271 [2024-11-19 11:27:12.591514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.271 [2024-11-19 11:27:12.603258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.271 [2024-11-19 11:27:12.603461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.271 [2024-11-19 11:27:12.603487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.271 [2024-11-19 11:27:12.615175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.271 [2024-11-19 11:27:12.615374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.271 [2024-11-19 11:27:12.615399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.271 [2024-11-19 11:27:12.627174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.271 [2024-11-19 11:27:12.627423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.271 [2024-11-19 11:27:12.627453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.271 [2024-11-19 11:27:12.639314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.271 [2024-11-19 11:27:12.639562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.271 [2024-11-19 11:27:12.639589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.271 [2024-11-19 11:27:12.651425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.272 [2024-11-19 11:27:12.651618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:17.272 [2024-11-19 11:27:12.651643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.272 [2024-11-19 11:27:12.663504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.272 [2024-11-19 11:27:12.663732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.272 [2024-11-19 11:27:12.663760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.272 [2024-11-19 11:27:12.676189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.272 [2024-11-19 11:27:12.676382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.272 [2024-11-19 11:27:12.676420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.272 [2024-11-19 11:27:12.688728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.272 [2024-11-19 11:27:12.688961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.272 [2024-11-19 11:27:12.688987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.272 [2024-11-19 11:27:12.700855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.272 [2024-11-19 11:27:12.701079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:12598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.272 [2024-11-19 11:27:12.701104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.272 [2024-11-19 11:27:12.712954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.272 [2024-11-19 11:27:12.713196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.272 [2024-11-19 11:27:12.713222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.272 [2024-11-19 11:27:12.725098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.272 [2024-11-19 11:27:12.725325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.272 [2024-11-19 11:27:12.725372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.272 [2024-11-19 11:27:12.737304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.272 [2024-11-19 11:27:12.737536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.272 [2024-11-19 11:27:12.737565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.272 [2024-11-19 11:27:12.749875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.272 [2024-11-19 11:27:12.750038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.272 [2024-11-19 11:27:12.750064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.272 [2024-11-19 11:27:12.762440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.272 [2024-11-19 11:27:12.762594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.272 [2024-11-19 11:27:12.762619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.530 [2024-11-19 11:27:12.775698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.530 [2024-11-19 11:27:12.775881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.530 [2024-11-19 11:27:12.775906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.530 [2024-11-19 11:27:12.787908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.530 [2024-11-19 11:27:12.788136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.530 [2024-11-19 11:27:12.788160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.530 [2024-11-19 11:27:12.800000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 
00:25:17.530 [2024-11-19 11:27:12.800240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.531 [2024-11-19 11:27:12.800264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.531 [2024-11-19 11:27:12.812102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.531 [2024-11-19 11:27:12.812329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.531 [2024-11-19 11:27:12.812376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.531 [2024-11-19 11:27:12.824214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.531 [2024-11-19 11:27:12.824470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.531 [2024-11-19 11:27:12.824496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.531 [2024-11-19 11:27:12.836335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.531 [2024-11-19 11:27:12.836591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.531 [2024-11-19 11:27:12.836617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.531 [2024-11-19 11:27:12.848469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.531 [2024-11-19 11:27:12.848711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.531 [2024-11-19 11:27:12.848736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.531 22013.50 IOPS, 85.99 MiB/s [2024-11-19T10:27:13.028Z] [2024-11-19 11:27:12.860540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1220) with pdu=0x2000166e23b8 00:25:17.531 [2024-11-19 11:27:12.860771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.531 [2024-11-19 11:27:12.860797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.531 00:25:17.531 Latency(us) 00:25:17.531 [2024-11-19T10:27:13.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.531 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:17.531 nvme0n1 : 2.01 22011.69 85.98 0.00 0.00 5803.45 2463.67 14951.92 00:25:17.531 [2024-11-19T10:27:13.028Z] =================================================================================================================== 00:25:17.531 [2024-11-19T10:27:13.028Z] Total : 22011.69 85.98 0.00 0.00 5803.45 2463.67 14951.92 00:25:17.531 { 00:25:17.531 "results": [ 00:25:17.531 { 00:25:17.531 "job": "nvme0n1", 00:25:17.531 "core_mask": "0x2", 00:25:17.531 "workload": "randwrite", 00:25:17.531 "status": "finished", 00:25:17.531 "queue_depth": 128, 00:25:17.531 "io_size": 4096, 00:25:17.531 "runtime": 2.007433, 00:25:17.531 "iops": 22011.693540955042, 00:25:17.531 "mibps": 85.98317789435563, 00:25:17.531 "io_failed": 0, 00:25:17.531 "io_timeout": 0, 00:25:17.531 
"avg_latency_us": 5803.4494752185365, 00:25:17.531 "min_latency_us": 2463.6681481481482, 00:25:17.531 "max_latency_us": 14951.917037037038 00:25:17.531 } 00:25:17.531 ], 00:25:17.531 "core_count": 1 00:25:17.531 } 00:25:17.531 11:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:17.531 11:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:17.531 11:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:17.531 11:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:17.531 | .driver_specific 00:25:17.531 | .nvme_error 00:25:17.531 | .status_code 00:25:17.531 | .command_transient_transport_error' 00:25:17.789 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 173 > 0 )) 00:25:17.789 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2719819 00:25:17.789 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2719819 ']' 00:25:17.789 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2719819 00:25:17.789 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:17.789 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.789 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2719819 00:25:17.789 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:17.789 11:27:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:17.789 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2719819' 00:25:17.789 killing process with pid 2719819 00:25:17.789 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2719819 00:25:17.789 Received shutdown signal, test time was about 2.000000 seconds 00:25:17.789 00:25:17.789 Latency(us) 00:25:17.789 [2024-11-19T10:27:13.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.789 [2024-11-19T10:27:13.286Z] =================================================================================================================== 00:25:17.789 [2024-11-19T10:27:13.286Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.789 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2719819 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2720346 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@60 -- # waitforlisten 2720346 /var/tmp/bperf.sock 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2720346 ']' 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:18.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.047 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:18.047 [2024-11-19 11:27:13.438667] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:25:18.047 [2024-11-19 11:27:13.438765] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2720346 ] 00:25:18.047 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:18.047 Zero copy mechanism will not be used. 
00:25:18.047 [2024-11-19 11:27:13.513581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.304 [2024-11-19 11:27:13.572639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.305 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.305 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:18.305 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:18.305 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:18.562 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:18.562 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.562 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:18.562 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.562 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:18.562 11:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.128 nvme0n1 00:25:19.128 11:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:19.128 11:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.128 11:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:19.128 11:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.128 11:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:19.128 11:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:19.128 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:19.128 Zero copy mechanism will not be used. 00:25:19.128 Running I/O for 2 seconds... 00:25:19.128 [2024-11-19 11:27:14.573600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.128 [2024-11-19 11:27:14.573725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.128 [2024-11-19 11:27:14.573761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.128 [2024-11-19 11:27:14.579916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.128 [2024-11-19 11:27:14.580005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.128 [2024-11-19 11:27:14.580032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.128 
[2024-11-19 11:27:14.586447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.128 [2024-11-19 11:27:14.586541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.128 [2024-11-19 11:27:14.586568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.128 [2024-11-19 11:27:14.592730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.128 [2024-11-19 11:27:14.592808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.128 [2024-11-19 11:27:14.592834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.128 [2024-11-19 11:27:14.599062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.128 [2024-11-19 11:27:14.599170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.128 [2024-11-19 11:27:14.599195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.128 [2024-11-19 11:27:14.605473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.128 [2024-11-19 11:27:14.605570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.128 [2024-11-19 11:27:14.605597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.128 [2024-11-19 11:27:14.612114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.128 [2024-11-19 11:27:14.612197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.128 [2024-11-19 11:27:14.612222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.128 [2024-11-19 11:27:14.618818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.128 [2024-11-19 11:27:14.618927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.128 [2024-11-19 11:27:14.618952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.387 [2024-11-19 11:27:14.626151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.387 [2024-11-19 11:27:14.626285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.387 [2024-11-19 11:27:14.626325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.387 [2024-11-19 11:27:14.632964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.387 [2024-11-19 11:27:14.633047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.387 [2024-11-19 11:27:14.633072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.387 [2024-11-19 11:27:14.639683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.387 [2024-11-19 11:27:14.639776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.387 [2024-11-19 11:27:14.639800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.387 [2024-11-19 11:27:14.646299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.387 [2024-11-19 11:27:14.646396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.387 [2024-11-19 11:27:14.646422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.387 [2024-11-19 11:27:14.653512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.387 [2024-11-19 11:27:14.653614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.387 [2024-11-19 11:27:14.653642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.387 [2024-11-19 11:27:14.660563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.387 [2024-11-19 11:27:14.660658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.387 [2024-11-19 11:27:14.660683] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.387 [2024-11-19 11:27:14.667319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.387 [2024-11-19 11:27:14.667447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.387 [2024-11-19 11:27:14.667474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.387 [2024-11-19 11:27:14.673859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.387 [2024-11-19 11:27:14.673940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.387 [2024-11-19 11:27:14.673965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.387 [2024-11-19 11:27:14.680627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.387 [2024-11-19 11:27:14.680717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.387 [2024-11-19 11:27:14.680742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.387 [2024-11-19 11:27:14.687625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.387 [2024-11-19 11:27:14.687754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.387 [2024-11-19 11:27:14.687779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.387 [2024-11-19 11:27:14.694758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.387 [2024-11-19 11:27:14.694840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.387 [2024-11-19 11:27:14.694865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.387 [2024-11-19 11:27:14.701508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.387 [2024-11-19 11:27:14.701592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.387 [2024-11-19 11:27:14.701618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.387 [2024-11-19 11:27:14.708141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.387 [2024-11-19 11:27:14.708218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.387 [2024-11-19 11:27:14.708243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.387 [2024-11-19 11:27:14.714854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.387 [2024-11-19 11:27:14.714939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.387 [2024-11-19 11:27:14.714964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.387 [2024-11-19 11:27:14.722189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.387 [2024-11-19 11:27:14.722276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.387 [2024-11-19 11:27:14.722307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.387 [2024-11-19 11:27:14.729540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.387 [2024-11-19 11:27:14.729679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.387 [2024-11-19 11:27:14.729705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.387 [2024-11-19 11:27:14.736211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.736285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.736310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.742652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.742774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.742799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.748909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.749000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.749025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.755287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.755424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.755452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.761382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.761474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.761500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.767398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.767503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.767530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.773320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.773433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.773458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.779216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.779293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.779318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.785089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.785161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.785186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.791787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.791887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.791912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.798050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.798154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.798181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.804940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.805012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.805038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.811337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.811475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.811505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.817693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.817787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.817814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.824231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.824310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.824335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.830829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.830925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.830952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.837834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.837936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.837963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.844949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.845014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.845038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.852752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.852861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.852886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.860747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.860915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.860942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.868313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.868466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.868494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.875101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.875176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.875201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.388 [2024-11-19 11:27:14.881144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.388 [2024-11-19 11:27:14.881252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.388 [2024-11-19 11:27:14.881281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.887848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.887924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.887967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.893956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.894041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.894081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.899619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.899709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.899736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.905431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.905518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.905545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.911066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.911133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.911157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.916872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.916940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.916965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.922830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.922906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.922934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.929387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.929467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.929494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.935166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.935232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.935257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.940921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.940992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.941017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.946913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.947020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.947044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.952875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.952958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.952989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.958582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.958661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.958685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.965338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.965434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.965477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.971241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.971326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.971353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.977226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.977302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.977328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.983431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.983518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.983544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.989182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.989281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.989307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:14.994976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:14.995101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:14.995128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:15.000720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:15.000793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:15.000817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:15.006460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:15.006538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:15.006564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:15.012310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:15.012408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:15.012434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:15.018432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:15.018518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:15.018545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:15.024541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:15.024666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.648 [2024-11-19 11:27:15.024708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.648 [2024-11-19 11:27:15.030485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.648 [2024-11-19 11:27:15.030570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.030605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.036164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.036257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.036281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.041986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.042059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.042084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.047711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.047774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.047804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.053469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.053543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.053568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.059155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.059230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.059254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.065054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.065121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.065145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.071557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.071655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.071698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.077234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.077308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.077333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.083357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.083446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.083471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.089064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.089166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.089193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.094837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.094957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.094983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.101083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.101250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.101276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.107544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.107659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.107700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.114148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.114360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.114410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.120358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.120487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.120514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:19.649 [2024-11-19 11:27:15.126806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:19.649 [2024-11-19 11:27:15.127035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.649 [2024-11-19 11:27:15.127061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.649 [2024-11-19 11:27:15.133438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.649 [2024-11-19 11:27:15.133685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.649 [2024-11-19 11:27:15.133712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.649 [2024-11-19 11:27:15.139780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.649 [2024-11-19 11:27:15.139939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.649 [2024-11-19 11:27:15.139985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.909 [2024-11-19 11:27:15.146421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.909 [2024-11-19 11:27:15.146651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.909 [2024-11-19 11:27:15.146694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.909 [2024-11-19 11:27:15.153449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.909 [2024-11-19 11:27:15.153585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.909 [2024-11-19 11:27:15.153612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.909 [2024-11-19 11:27:15.159740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.909 [2024-11-19 11:27:15.159905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.909 [2024-11-19 11:27:15.159932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.909 [2024-11-19 11:27:15.166162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.909 [2024-11-19 11:27:15.166288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.909 [2024-11-19 11:27:15.166315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.909 [2024-11-19 11:27:15.172440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.909 [2024-11-19 11:27:15.172582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.909 [2024-11-19 11:27:15.172608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.909 [2024-11-19 11:27:15.178843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.909 [2024-11-19 11:27:15.178970] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.909 [2024-11-19 11:27:15.178998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.909 [2024-11-19 11:27:15.185081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.909 [2024-11-19 11:27:15.185227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.909 [2024-11-19 11:27:15.185254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.909 [2024-11-19 11:27:15.191345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.909 [2024-11-19 11:27:15.191479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.909 [2024-11-19 11:27:15.191507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.909 [2024-11-19 11:27:15.196942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.909 [2024-11-19 11:27:15.197109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.909 [2024-11-19 11:27:15.197136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.909 [2024-11-19 11:27:15.203414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.909 [2024-11-19 11:27:15.203581] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.909 [2024-11-19 11:27:15.203608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.209999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.210113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.210145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.217551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.217699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.217726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.223793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.223900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.223927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.229732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with 
pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.229861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.229888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.235748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.235914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.235941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.242507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.242606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.242632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.250495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.250702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.250729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.257412] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.257575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.257603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.263569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.263677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.263703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.269589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.269761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.269787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.275223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.275323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.275349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 
11:27:15.281759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.281939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.281965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.289569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.289700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.289727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.295587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.295699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.295726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.302033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.302119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.302144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.307973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.308088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.308115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.314613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.314750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.314776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.321022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.321150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.321176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.327376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.327488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.327513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.334131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.334232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.334258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.340908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.341005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.341032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.347479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.347572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.347600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.353228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.353326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.353372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.358787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.358894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.358920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.364292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.364394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.910 [2024-11-19 11:27:15.364422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.910 [2024-11-19 11:27:15.370091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.910 [2024-11-19 11:27:15.370196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.911 [2024-11-19 11:27:15.370223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.911 [2024-11-19 11:27:15.376215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.911 [2024-11-19 11:27:15.376328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:19.911 [2024-11-19 11:27:15.376360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.911 [2024-11-19 11:27:15.382879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.911 [2024-11-19 11:27:15.383018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.911 [2024-11-19 11:27:15.383045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.911 [2024-11-19 11:27:15.390074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.911 [2024-11-19 11:27:15.390206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.911 [2024-11-19 11:27:15.390232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.911 [2024-11-19 11:27:15.397796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:19.911 [2024-11-19 11:27:15.397893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.911 [2024-11-19 11:27:15.397920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.170 [2024-11-19 11:27:15.405356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.170 [2024-11-19 11:27:15.405500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.170 [2024-11-19 11:27:15.405531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.170 [2024-11-19 11:27:15.411667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.170 [2024-11-19 11:27:15.411751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.170 [2024-11-19 11:27:15.411776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.417536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.417660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.417686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.423567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.423685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.423711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.429556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.429710] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.429737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.435346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.435470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.435497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.441086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.441169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.441196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.447550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.447655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.447697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.453908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.454039] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.454066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.460246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.460360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.460395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.466069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.466211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.466237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.472915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.473107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.473146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.481298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with 
pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.481500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.481528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.487597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.487686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.487712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.493887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.494063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.494090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.499934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.500038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.500065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.506166] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.506235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.506260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.512621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.512742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.512769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.519392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.519497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.519526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.525312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.525411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.525439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 
11:27:15.531141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.531209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.531234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.536836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.536901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.536926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.542975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.543057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.543090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.548707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.548801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.548828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.554437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.554530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.554560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.560274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.560340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.560388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.171 [2024-11-19 11:27:15.566554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.171 [2024-11-19 11:27:15.566630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.171 [2024-11-19 11:27:15.566676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.171 4849.00 IOPS, 606.12 MiB/s [2024-11-19T10:27:15.668Z] [2024-11-19 11:27:15.573521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.573599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.573625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.579060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.579140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.579164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.585418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.585495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.585523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.591382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.591470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.591499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.597872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.597962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:20.172 [2024-11-19 11:27:15.597988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.603566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.603655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.603682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.609159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.609236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.609260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.615082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.615157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.615181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.621423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.621516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.621546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.627592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.627676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.627702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.633423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.633515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.633544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.639189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.639253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.639277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.645062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.645145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.645172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.650874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.650942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.650967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.657704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.657797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.657823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.172 [2024-11-19 11:27:15.663968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.172 [2024-11-19 11:27:15.664055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.172 [2024-11-19 11:27:15.664084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.432 [2024-11-19 11:27:15.671112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 
00:25:20.432 [2024-11-19 11:27:15.671204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.432 [2024-11-19 11:27:15.671247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.432 [2024-11-19 11:27:15.677521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.432 [2024-11-19 11:27:15.677605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.432 [2024-11-19 11:27:15.677631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.432 [2024-11-19 11:27:15.683226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.432 [2024-11-19 11:27:15.683312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.432 [2024-11-19 11:27:15.683338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.432 [2024-11-19 11:27:15.689118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.432 [2024-11-19 11:27:15.689203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.432 [2024-11-19 11:27:15.689230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.432 [2024-11-19 11:27:15.694865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.432 [2024-11-19 11:27:15.694987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.432 [2024-11-19 11:27:15.695014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.432 [2024-11-19 11:27:15.700718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.432 [2024-11-19 11:27:15.700794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.432 [2024-11-19 11:27:15.700827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.432 [2024-11-19 11:27:15.706487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.432 [2024-11-19 11:27:15.706566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.432 [2024-11-19 11:27:15.706593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.432 [2024-11-19 11:27:15.712457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.432 [2024-11-19 11:27:15.712525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.432 [2024-11-19 11:27:15.712551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.432 [2024-11-19 11:27:15.718841] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.432 [2024-11-19 11:27:15.718906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.432 [2024-11-19 11:27:15.718931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.432 [2024-11-19 11:27:15.725434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.432 [2024-11-19 11:27:15.725501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.432 [2024-11-19 11:27:15.725527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.432 [2024-11-19 11:27:15.731424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.432 [2024-11-19 11:27:15.731514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.432 [2024-11-19 11:27:15.731543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.432 [2024-11-19 11:27:15.737495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.432 [2024-11-19 11:27:15.737566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.432 [2024-11-19 11:27:15.737592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:25:20.432 [2024-11-19 11:27:15.743187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.743316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.743357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.748942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.749023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.749050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.754615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.754699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.754723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.760279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.760372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.760399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.766006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.766076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.766101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.771778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.771858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.771884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.777592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.777682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.777710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.783752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.783829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.783855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.790512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.790607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.790652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.796905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.796977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.797003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.802831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.802910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.802935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.808846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.808928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:20.433 [2024-11-19 11:27:15.808952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.814630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.814750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.814774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.820530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.820603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.820627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.826449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.826526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.826566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.832330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.832433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.832473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.838880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.838959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.838985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.844951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.845019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.845044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.851108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.851175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.851200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.857073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.857151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.857180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.863072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.863162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.863187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.868902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.433 [2024-11-19 11:27:15.869003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-19 11:27:15.869027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.433 [2024-11-19 11:27:15.874974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.434 [2024-11-19 11:27:15.875076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-19 11:27:15.875101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.434 [2024-11-19 11:27:15.881084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 
00:25:20.434 [2024-11-19 11:27:15.881165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-19 11:27:15.881189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.434 [2024-11-19 11:27:15.887189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.434 [2024-11-19 11:27:15.887265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-19 11:27:15.887291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.434 [2024-11-19 11:27:15.893256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.434 [2024-11-19 11:27:15.893334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-19 11:27:15.893382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.434 [2024-11-19 11:27:15.899251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.434 [2024-11-19 11:27:15.899326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-19 11:27:15.899374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.434 [2024-11-19 11:27:15.905164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.434 [2024-11-19 11:27:15.905251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-19 11:27:15.905276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.434 [2024-11-19 11:27:15.911267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.434 [2024-11-19 11:27:15.911385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-19 11:27:15.911412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.434 [2024-11-19 11:27:15.917767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.434 [2024-11-19 11:27:15.917841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-19 11:27:15.917866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.434 [2024-11-19 11:27:15.924474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.434 [2024-11-19 11:27:15.924558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-19 11:27:15.924583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.693 [2024-11-19 11:27:15.931134] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.693 [2024-11-19 11:27:15.931209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.693 [2024-11-19 11:27:15.931234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.693 [2024-11-19 11:27:15.937507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.693 [2024-11-19 11:27:15.937608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.693 [2024-11-19 11:27:15.937634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.693 [2024-11-19 11:27:15.943733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.693 [2024-11-19 11:27:15.943820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.693 [2024-11-19 11:27:15.943845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.693 [2024-11-19 11:27:15.949855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.693 [2024-11-19 11:27:15.949938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.693 [2024-11-19 11:27:15.949963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.693 [2024-11-19 11:27:15.955917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.693 [2024-11-19 11:27:15.955990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.693 [2024-11-19 11:27:15.956015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.693 [2024-11-19 11:27:15.961913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.693 [2024-11-19 11:27:15.961982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.693 [2024-11-19 11:27:15.962007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.693 [2024-11-19 11:27:15.968024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.693 [2024-11-19 11:27:15.968102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.693 [2024-11-19 11:27:15.968127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.693 [2024-11-19 11:27:15.974603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.693 [2024-11-19 11:27:15.974685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.693 [2024-11-19 11:27:15.974710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.693 [2024-11-19 11:27:15.980371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.693 [2024-11-19 11:27:15.980472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.693 [2024-11-19 11:27:15.980500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:15.985978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:15.986070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:15.986097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:15.992278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:15.992455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:15.992495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:15.998864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:15.999024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:15.999051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.005990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.006122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.006155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.013627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.013833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.013870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.021775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.021850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.021883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.029610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.029752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.029777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.038646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.038792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.038819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.046186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.046347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.046415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.052401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.052497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.052525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.058096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.058204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.058231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.063706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.063837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.063864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.069492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.069577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.069605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.075355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.075512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.075540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.081142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.081248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.081273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.087257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.087330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.087378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.093698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.093851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.093877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.100717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.100866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.100893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.107211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.107390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.107418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.113542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.113632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.113675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.119994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.694 [2024-11-19 11:27:16.120104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.694 [2024-11-19 11:27:16.120129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.694 [2024-11-19 11:27:16.126878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.695 [2024-11-19 11:27:16.127017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.695 [2024-11-19 11:27:16.127043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.695 [2024-11-19 11:27:16.134659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.695 [2024-11-19 11:27:16.134745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.695 [2024-11-19 11:27:16.134770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.695 [2024-11-19 11:27:16.142802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.695 [2024-11-19 11:27:16.143017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.695 [2024-11-19 11:27:16.143045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.695 [2024-11-19 11:27:16.151203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.695 [2024-11-19 11:27:16.151386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.695 [2024-11-19 11:27:16.151414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.695 [2024-11-19 11:27:16.159472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.695 [2024-11-19 11:27:16.159569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.695 [2024-11-19 11:27:16.159595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.695 [2024-11-19 11:27:16.167688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.695 [2024-11-19 11:27:16.167794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.695 [2024-11-19 11:27:16.167820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.695 [2024-11-19 11:27:16.176525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.695 [2024-11-19 11:27:16.176714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.695 [2024-11-19 11:27:16.176741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.695 [2024-11-19 11:27:16.184078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.695 [2024-11-19 11:27:16.184152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.695 [2024-11-19 11:27:16.184178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.954 [2024-11-19 11:27:16.191171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.954 [2024-11-19 11:27:16.191278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.954 [2024-11-19 11:27:16.191303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.954 [2024-11-19 11:27:16.197942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.954 [2024-11-19 11:27:16.198021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.954 [2024-11-19 11:27:16.198047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.954 [2024-11-19 11:27:16.204626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.954 [2024-11-19 11:27:16.204742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.954 [2024-11-19 11:27:16.204775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.954 [2024-11-19 11:27:16.210986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.954 [2024-11-19 11:27:16.211119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.954 [2024-11-19 11:27:16.211144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.954 [2024-11-19 11:27:16.217369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.954 [2024-11-19 11:27:16.217461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.954 [2024-11-19 11:27:16.217488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.954 [2024-11-19 11:27:16.223705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.954 [2024-11-19 11:27:16.223790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.954 [2024-11-19 11:27:16.223816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.954 [2024-11-19 11:27:16.230055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.954 [2024-11-19 11:27:16.230136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.954 [2024-11-19 11:27:16.230161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.954 [2024-11-19 11:27:16.236716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.954 [2024-11-19 11:27:16.236796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.954 [2024-11-19 11:27:16.236821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.954 [2024-11-19 11:27:16.243605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.954 [2024-11-19 11:27:16.243715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.954 [2024-11-19 11:27:16.243740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.954 [2024-11-19 11:27:16.250314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.954 [2024-11-19 11:27:16.250434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.954 [2024-11-19 11:27:16.250460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.954 [2024-11-19 11:27:16.256827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.954 [2024-11-19 11:27:16.256912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.954 [2024-11-19 11:27:16.256938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.954 [2024-11-19 11:27:16.263012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.263104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.263129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.269517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.269608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.269635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.275761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.275853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.275878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.281996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.282088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.282120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.288496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.288579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.288605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.294887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.294964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.294989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.301867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.301944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.301968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.309112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.309188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.309214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.315731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.315816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.315841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.323014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.323095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.323120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.330271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.330432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.330459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.337227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.337302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.337327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.344677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.344802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.344828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.351240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.351369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.351396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.357865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.357953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.357979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.364893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.365122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.365150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.372988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.373113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.373145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.381588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.381722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.381754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.389520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.389732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.389760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.397301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.397449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.397475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.405546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.405713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.405738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.413579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.413763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.413788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.421764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.421982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.422010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.429606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.429737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.429763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:20.955 [2024-11-19 11:27:16.437822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8
00:25:20.955 [2024-11-19 11:27:16.438008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.955 [2024-11-19 11:27:16.438035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0
dnr:0 00:25:20.955 [2024-11-19 11:27:16.445825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:20.955 [2024-11-19 11:27:16.446050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.955 [2024-11-19 11:27:16.446095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.454102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.454324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.454376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.462417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.462515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.462541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.469076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.469172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.469199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.475667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.475794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.475820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.482465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.482543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.482570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.488996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.489074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.489099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.495638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.495731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.495756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.502212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.502293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.502318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.509032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.509110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.509135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.515903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.516011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.516036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.522798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.522901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:21.215 [2024-11-19 11:27:16.522926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.529857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.529941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.529966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.537250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.537330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.537379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.544087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.544165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.544190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.551337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.551448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.551474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.558294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.558398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.558424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.215 [2024-11-19 11:27:16.564944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.565050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.565075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.215 4774.00 IOPS, 596.75 MiB/s [2024-11-19T10:27:16.712Z] [2024-11-19 11:27:16.573154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f1560) with pdu=0x2000166ff3c8 00:25:21.215 [2024-11-19 11:27:16.573233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.215 [2024-11-19 11:27:16.573264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.215 00:25:21.215 Latency(us) 00:25:21.215 [2024-11-19T10:27:16.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.215 Job: nvme0n1 (Core 
Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:21.215 nvme0n1 : 2.00 4770.27 596.28 0.00 0.00 3346.75 2451.53 10534.31 00:25:21.215 [2024-11-19T10:27:16.712Z] =================================================================================================================== 00:25:21.215 [2024-11-19T10:27:16.712Z] Total : 4770.27 596.28 0.00 0.00 3346.75 2451.53 10534.31 00:25:21.215 { 00:25:21.215 "results": [ 00:25:21.215 { 00:25:21.216 "job": "nvme0n1", 00:25:21.216 "core_mask": "0x2", 00:25:21.216 "workload": "randwrite", 00:25:21.216 "status": "finished", 00:25:21.216 "queue_depth": 16, 00:25:21.216 "io_size": 131072, 00:25:21.216 "runtime": 2.004917, 00:25:21.216 "iops": 4770.272285585887, 00:25:21.216 "mibps": 596.2840356982359, 00:25:21.216 "io_failed": 0, 00:25:21.216 "io_timeout": 0, 00:25:21.216 "avg_latency_us": 3346.7454956085317, 00:25:21.216 "min_latency_us": 2451.531851851852, 00:25:21.216 "max_latency_us": 10534.305185185185 00:25:21.216 } 00:25:21.216 ], 00:25:21.216 "core_count": 1 00:25:21.216 } 00:25:21.216 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:21.216 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:21.216 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:21.216 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:21.216 | .driver_specific 00:25:21.216 | .nvme_error 00:25:21.216 | .status_code 00:25:21.216 | .command_transient_transport_error' 00:25:21.474 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 309 > 0 )) 00:25:21.474 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 
2720346 00:25:21.474 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2720346 ']' 00:25:21.474 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2720346 00:25:21.474 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:21.474 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:21.474 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2720346 00:25:21.474 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:21.474 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:21.474 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2720346' 00:25:21.474 killing process with pid 2720346 00:25:21.474 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2720346 00:25:21.474 Received shutdown signal, test time was about 2.000000 seconds 00:25:21.474 00:25:21.474 Latency(us) 00:25:21.474 [2024-11-19T10:27:16.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.474 [2024-11-19T10:27:16.971Z] =================================================================================================================== 00:25:21.474 [2024-11-19T10:27:16.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:21.474 11:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2720346 00:25:21.732 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2718348 00:25:21.732 11:27:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2718348 ']' 00:25:21.732 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2718348 00:25:21.732 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:21.732 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:21.732 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718348 00:25:21.732 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:21.732 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:21.732 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718348' 00:25:21.732 killing process with pid 2718348 00:25:21.732 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2718348 00:25:21.732 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2718348 00:25:21.990 00:25:21.990 real 0m15.526s 00:25:21.990 user 0m30.466s 00:25:21.990 sys 0m5.116s 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.990 ************************************ 00:25:21.990 END TEST nvmf_digest_error 00:25:21.990 ************************************ 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 
00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:21.990 rmmod nvme_tcp 00:25:21.990 rmmod nvme_fabrics 00:25:21.990 rmmod nvme_keyring 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2718348 ']' 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2718348 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2718348 ']' 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2718348 00:25:21.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2718348) - No such process 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2718348 is not found' 00:25:21.990 Process with pid 2718348 is not found 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:21.990 11:27:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.990 11:27:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:24.536 00:25:24.536 real 0m36.312s 00:25:24.536 user 1m2.527s 00:25:24.536 sys 0m12.163s 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:24.536 ************************************ 00:25:24.536 END TEST nvmf_digest 00:25:24.536 ************************************ 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 
3 -le 1 ']' 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.536 ************************************ 00:25:24.536 START TEST nvmf_bdevperf 00:25:24.536 ************************************ 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:24.536 * Looking for test storage... 00:25:24.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:24.536 
11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:24.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.536 --rc genhtml_branch_coverage=1 00:25:24.536 --rc genhtml_function_coverage=1 00:25:24.536 --rc genhtml_legend=1 00:25:24.536 --rc geninfo_all_blocks=1 00:25:24.536 --rc geninfo_unexecuted_blocks=1 00:25:24.536 00:25:24.536 ' 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:24.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.536 --rc genhtml_branch_coverage=1 00:25:24.536 --rc genhtml_function_coverage=1 00:25:24.536 --rc genhtml_legend=1 00:25:24.536 --rc geninfo_all_blocks=1 00:25:24.536 --rc geninfo_unexecuted_blocks=1 00:25:24.536 00:25:24.536 ' 00:25:24.536 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:24.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.537 --rc genhtml_branch_coverage=1 00:25:24.537 --rc genhtml_function_coverage=1 00:25:24.537 --rc genhtml_legend=1 00:25:24.537 --rc geninfo_all_blocks=1 00:25:24.537 --rc geninfo_unexecuted_blocks=1 00:25:24.537 00:25:24.537 ' 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:24.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.537 --rc genhtml_branch_coverage=1 00:25:24.537 --rc genhtml_function_coverage=1 00:25:24.537 --rc genhtml_legend=1 00:25:24.537 --rc geninfo_all_blocks=1 00:25:24.537 --rc geninfo_unexecuted_blocks=1 00:25:24.537 00:25:24.537 ' 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:24.537 11:27:19 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@15 -- # shopt -s extglob 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:24.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:25:24.537 11:27:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:25:27.069 Found 
0000:82:00.0 (0x8086 - 0x159b) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:25:27.069 Found 0000:82:00.1 (0x8086 - 0x159b) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:25:27.069 Found net devices under 0000:82:00.0: cvl_0_0 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:25:27.069 Found net devices under 0000:82:00.1: cvl_0_1 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.069 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:27.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:25:27.070 00:25:27.070 --- 10.0.0.2 ping statistics --- 00:25:27.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.070 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:27.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:25:27.070 00:25:27.070 --- 10.0.0.1 ping statistics --- 00:25:27.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.070 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2723119 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2723119 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2723119 ']' 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.070 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.070 [2024-11-19 11:27:22.547022] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:25:27.070 [2024-11-19 11:27:22.547102] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.328 [2024-11-19 11:27:22.630616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:27.328 [2024-11-19 11:27:22.688758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.328 [2024-11-19 11:27:22.688810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:27.328 [2024-11-19 11:27:22.688824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.328 [2024-11-19 11:27:22.688850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.328 [2024-11-19 11:27:22.688860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:27.328 [2024-11-19 11:27:22.690378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.328 [2024-11-19 11:27:22.690426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:27.328 [2024-11-19 11:27:22.690431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.328 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.328 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:27.328 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:27.328 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:27.328 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.587 [2024-11-19 11:27:22.840136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.587 11:27:22 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.587 Malloc0 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.587 [2024-11-19 11:27:22.901949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:27.587 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:27.587 { 00:25:27.587 "params": { 00:25:27.587 "name": "Nvme$subsystem", 00:25:27.587 "trtype": "$TEST_TRANSPORT", 00:25:27.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.588 "adrfam": "ipv4", 00:25:27.588 "trsvcid": "$NVMF_PORT", 00:25:27.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.588 "hdgst": ${hdgst:-false}, 00:25:27.588 "ddgst": ${ddgst:-false} 00:25:27.588 }, 00:25:27.588 "method": "bdev_nvme_attach_controller" 00:25:27.588 } 00:25:27.588 EOF 00:25:27.588 )") 00:25:27.588 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:27.588 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:25:27.588 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:27.588 11:27:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:27.588 "params": { 00:25:27.588 "name": "Nvme1", 00:25:27.588 "trtype": "tcp", 00:25:27.588 "traddr": "10.0.0.2", 00:25:27.588 "adrfam": "ipv4", 00:25:27.588 "trsvcid": "4420", 00:25:27.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:27.588 "hdgst": false, 00:25:27.588 "ddgst": false 00:25:27.588 }, 00:25:27.588 "method": "bdev_nvme_attach_controller" 00:25:27.588 }' 00:25:27.588 [2024-11-19 11:27:22.954167] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:25:27.588 [2024-11-19 11:27:22.954256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723149 ] 00:25:27.588 [2024-11-19 11:27:23.040210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.846 [2024-11-19 11:27:23.103825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.846 Running I/O for 1 seconds... 
00:25:29.220 8639.00 IOPS, 33.75 MiB/s 00:25:29.220 Latency(us) 00:25:29.220 [2024-11-19T10:27:24.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.220 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:29.220 Verification LBA range: start 0x0 length 0x4000 00:25:29.220 Nvme1n1 : 1.01 8674.81 33.89 0.00 0.00 14696.20 2730.67 13495.56 00:25:29.220 [2024-11-19T10:27:24.717Z] =================================================================================================================== 00:25:29.220 [2024-11-19T10:27:24.717Z] Total : 8674.81 33.89 0.00 0.00 14696.20 2730.67 13495.56 00:25:29.220 11:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2723291 00:25:29.220 11:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:29.220 11:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:29.220 11:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:29.220 11:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:29.220 11:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:29.220 11:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:29.220 11:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:29.220 { 00:25:29.220 "params": { 00:25:29.220 "name": "Nvme$subsystem", 00:25:29.220 "trtype": "$TEST_TRANSPORT", 00:25:29.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.220 "adrfam": "ipv4", 00:25:29.220 "trsvcid": "$NVMF_PORT", 00:25:29.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.220 "hdgst": ${hdgst:-false}, 00:25:29.220 "ddgst": 
${ddgst:-false} 00:25:29.221 }, 00:25:29.221 "method": "bdev_nvme_attach_controller" 00:25:29.221 } 00:25:29.221 EOF 00:25:29.221 )") 00:25:29.221 11:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:29.221 11:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:29.221 11:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:29.221 11:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:29.221 "params": { 00:25:29.221 "name": "Nvme1", 00:25:29.221 "trtype": "tcp", 00:25:29.221 "traddr": "10.0.0.2", 00:25:29.221 "adrfam": "ipv4", 00:25:29.221 "trsvcid": "4420", 00:25:29.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:29.221 "hdgst": false, 00:25:29.221 "ddgst": false 00:25:29.221 }, 00:25:29.221 "method": "bdev_nvme_attach_controller" 00:25:29.221 }' 00:25:29.221 [2024-11-19 11:27:24.583889] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:25:29.221 [2024-11-19 11:27:24.583981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723291 ] 00:25:29.221 [2024-11-19 11:27:24.662929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.479 [2024-11-19 11:27:24.721201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.737 Running I/O for 15 seconds... 
00:25:31.656 8727.00 IOPS, 34.09 MiB/s [2024-11-19T10:27:27.750Z] 8760.00 IOPS, 34.22 MiB/s [2024-11-19T10:27:27.750Z] 11:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2723119 00:25:32.253 11:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:32.253 [2024-11-19 11:27:27.551962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.253 [2024-11-19 11:27:27.552015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.253 [2024-11-19 11:27:27.552041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.253 [2024-11-19 11:27:27.552068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.253 [2024-11-19 11:27:27.552092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.253 [2024-11-19 11:27:27.552107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.253 [2024-11-19 11:27:27.552122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.253 [2024-11-19 11:27:27.552135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.253 [2024-11-19 11:27:27.552151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.253 [2024-11-19 11:27:27.552166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
00:25:32.253-00:25:32.256 [2024-11-19 11:27:27.552180 - 11:27:27.555753] nvme_qpair.c: repeated NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion), summarized:
  - READ  sqid:1 nsid:1 lba:45184-45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
  - WRITE sqid:1 nsid:1 lba:45432-46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
  - every command completed with: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
  (~120 command/completion pairs, lba stepping by 8; varying cids; final WRITE entry at lba:46136 truncated mid-line in the captured log)
lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.256 [2024-11-19 11:27:27.555765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.256 [2024-11-19 11:27:27.555778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.256 [2024-11-19 11:27:27.555790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.256 [2024-11-19 11:27:27.555804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.256 [2024-11-19 11:27:27.555816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.256 [2024-11-19 11:27:27.555829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2728e30 is same with the state(6) to be set 00:25:32.256 [2024-11-19 11:27:27.555844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.256 [2024-11-19 11:27:27.555854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.256 [2024-11-19 11:27:27.555868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46160 len:8 PRP1 0x0 PRP2 0x0 00:25:32.256 [2024-11-19 11:27:27.555885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.256 [2024-11-19 11:27:27.555991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.256 [2024-11-19 11:27:27.556010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:32.256 [2024-11-19 11:27:27.556023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.256 [2024-11-19 11:27:27.556040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.256 [2024-11-19 11:27:27.556053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.256 [2024-11-19 11:27:27.556064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.256 [2024-11-19 11:27:27.556075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.256 [2024-11-19 11:27:27.556086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.256 [2024-11-19 11:27:27.556097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.256 [2024-11-19 11:27:27.559900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.256 [2024-11-19 11:27:27.559936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.256 [2024-11-19 11:27:27.560494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.256 [2024-11-19 11:27:27.560524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.257 [2024-11-19 11:27:27.560542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 
00:25:32.257 [2024-11-19 11:27:27.560775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.257 [2024-11-19 11:27:27.560963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.257 [2024-11-19 11:27:27.560981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.257 [2024-11-19 11:27:27.560996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.257 [2024-11-19 11:27:27.561010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.257 [2024-11-19 11:27:27.573231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.257 [2024-11-19 11:27:27.573600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.257 [2024-11-19 11:27:27.573627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.257 [2024-11-19 11:27:27.573643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.257 [2024-11-19 11:27:27.573841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.257 [2024-11-19 11:27:27.574029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.257 [2024-11-19 11:27:27.574048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.257 [2024-11-19 11:27:27.574065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.257 [2024-11-19 11:27:27.574077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.257 [2024-11-19 11:27:27.586444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.257 [2024-11-19 11:27:27.586794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.257 [2024-11-19 11:27:27.586819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.257 [2024-11-19 11:27:27.586834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.257 [2024-11-19 11:27:27.587019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.257 [2024-11-19 11:27:27.587207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.257 [2024-11-19 11:27:27.587225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.257 [2024-11-19 11:27:27.587238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.257 [2024-11-19 11:27:27.587250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.257 [2024-11-19 11:27:27.599495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.257 [2024-11-19 11:27:27.599838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.257 [2024-11-19 11:27:27.599863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.257 [2024-11-19 11:27:27.599877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.257 [2024-11-19 11:27:27.600061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.257 [2024-11-19 11:27:27.600249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.257 [2024-11-19 11:27:27.600267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.257 [2024-11-19 11:27:27.600279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.257 [2024-11-19 11:27:27.600291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.257 [2024-11-19 11:27:27.612674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.257 [2024-11-19 11:27:27.613071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.257 [2024-11-19 11:27:27.613097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.257 [2024-11-19 11:27:27.613112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.257 [2024-11-19 11:27:27.613296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.257 [2024-11-19 11:27:27.613513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.257 [2024-11-19 11:27:27.613534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.257 [2024-11-19 11:27:27.613546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.257 [2024-11-19 11:27:27.613558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.257 [2024-11-19 11:27:27.625799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.257 [2024-11-19 11:27:27.626218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.257 [2024-11-19 11:27:27.626243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.257 [2024-11-19 11:27:27.626258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.257 [2024-11-19 11:27:27.626483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.257 [2024-11-19 11:27:27.626676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.257 [2024-11-19 11:27:27.626697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.257 [2024-11-19 11:27:27.626709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.257 [2024-11-19 11:27:27.626722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.257 [2024-11-19 11:27:27.639005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.257 [2024-11-19 11:27:27.639391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.257 [2024-11-19 11:27:27.639418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.257 [2024-11-19 11:27:27.639433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.257 [2024-11-19 11:27:27.639625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.257 [2024-11-19 11:27:27.639830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.257 [2024-11-19 11:27:27.639851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.257 [2024-11-19 11:27:27.639864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.257 [2024-11-19 11:27:27.639877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.257 [2024-11-19 11:27:27.652127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.257 [2024-11-19 11:27:27.652501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.257 [2024-11-19 11:27:27.652527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.257 [2024-11-19 11:27:27.652541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.257 [2024-11-19 11:27:27.652725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.257 [2024-11-19 11:27:27.652912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.257 [2024-11-19 11:27:27.652931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.257 [2024-11-19 11:27:27.652944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.257 [2024-11-19 11:27:27.652956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.257 [2024-11-19 11:27:27.665194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.257 [2024-11-19 11:27:27.665609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.257 [2024-11-19 11:27:27.665636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.257 [2024-11-19 11:27:27.665655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.257 [2024-11-19 11:27:27.665840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.257 [2024-11-19 11:27:27.666028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.257 [2024-11-19 11:27:27.666046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.257 [2024-11-19 11:27:27.666058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.257 [2024-11-19 11:27:27.666070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.257 [2024-11-19 11:27:27.678550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.257 [2024-11-19 11:27:27.678929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.257 [2024-11-19 11:27:27.678955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.257 [2024-11-19 11:27:27.678970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.257 [2024-11-19 11:27:27.679188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.258 [2024-11-19 11:27:27.679418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.258 [2024-11-19 11:27:27.679455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.258 [2024-11-19 11:27:27.679469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.258 [2024-11-19 11:27:27.679482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.258 [2024-11-19 11:27:27.691649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.258 [2024-11-19 11:27:27.692059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.258 [2024-11-19 11:27:27.692085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.258 [2024-11-19 11:27:27.692099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.258 [2024-11-19 11:27:27.692283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.258 [2024-11-19 11:27:27.692517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.258 [2024-11-19 11:27:27.692538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.258 [2024-11-19 11:27:27.692552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.258 [2024-11-19 11:27:27.692565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.258 [2024-11-19 11:27:27.704632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.258 [2024-11-19 11:27:27.705002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.258 [2024-11-19 11:27:27.705027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.258 [2024-11-19 11:27:27.705041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.258 [2024-11-19 11:27:27.705225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.258 [2024-11-19 11:27:27.705447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.258 [2024-11-19 11:27:27.705467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.258 [2024-11-19 11:27:27.705480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.258 [2024-11-19 11:27:27.705492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.258 [2024-11-19 11:27:27.717692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.258 [2024-11-19 11:27:27.718064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.258 [2024-11-19 11:27:27.718091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.258 [2024-11-19 11:27:27.718106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.258 [2024-11-19 11:27:27.718290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.258 [2024-11-19 11:27:27.718525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.258 [2024-11-19 11:27:27.718547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.258 [2024-11-19 11:27:27.718561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.258 [2024-11-19 11:27:27.718574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.258 [2024-11-19 11:27:27.730781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.258 [2024-11-19 11:27:27.731127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.258 [2024-11-19 11:27:27.731152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.258 [2024-11-19 11:27:27.731166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.258 [2024-11-19 11:27:27.731350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.258 [2024-11-19 11:27:27.731588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.258 [2024-11-19 11:27:27.731608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.258 [2024-11-19 11:27:27.731621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.258 [2024-11-19 11:27:27.731633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.258 [2024-11-19 11:27:27.744477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.258 [2024-11-19 11:27:27.744925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.258 [2024-11-19 11:27:27.744950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.258 [2024-11-19 11:27:27.744973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.258 [2024-11-19 11:27:27.745157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.258 [2024-11-19 11:27:27.745345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.258 [2024-11-19 11:27:27.745373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.258 [2024-11-19 11:27:27.745406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.258 [2024-11-19 11:27:27.745420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.518 [2024-11-19 11:27:27.758015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.519 [2024-11-19 11:27:27.758431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.519 [2024-11-19 11:27:27.758457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.519 [2024-11-19 11:27:27.758471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.519 [2024-11-19 11:27:27.758655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.519 [2024-11-19 11:27:27.758842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.519 [2024-11-19 11:27:27.758861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.519 [2024-11-19 11:27:27.758873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.519 [2024-11-19 11:27:27.758886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.519 [2024-11-19 11:27:27.771010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.519 [2024-11-19 11:27:27.771413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.519 [2024-11-19 11:27:27.771439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.519 [2024-11-19 11:27:27.771454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.519 [2024-11-19 11:27:27.771637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.519 [2024-11-19 11:27:27.771825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.519 [2024-11-19 11:27:27.771844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.519 [2024-11-19 11:27:27.771857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.519 [2024-11-19 11:27:27.771869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.519 [2024-11-19 11:27:27.784114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.519 [2024-11-19 11:27:27.784473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.519 [2024-11-19 11:27:27.784498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.519 [2024-11-19 11:27:27.784513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.519 [2024-11-19 11:27:27.784696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.519 [2024-11-19 11:27:27.784884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.519 [2024-11-19 11:27:27.784904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.519 [2024-11-19 11:27:27.784917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.519 [2024-11-19 11:27:27.784929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.519 [2024-11-19 11:27:27.797270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.519 [2024-11-19 11:27:27.797687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.519 [2024-11-19 11:27:27.797712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.519 [2024-11-19 11:27:27.797726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.519 [2024-11-19 11:27:27.797910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.519 [2024-11-19 11:27:27.798097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.519 [2024-11-19 11:27:27.798115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.519 [2024-11-19 11:27:27.798128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.519 [2024-11-19 11:27:27.798139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.519 [2024-11-19 11:27:27.810304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.519 [2024-11-19 11:27:27.810691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.519 [2024-11-19 11:27:27.810716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.519 [2024-11-19 11:27:27.810730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.519 [2024-11-19 11:27:27.810913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.519 [2024-11-19 11:27:27.811100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.519 [2024-11-19 11:27:27.811119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.519 [2024-11-19 11:27:27.811131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.519 [2024-11-19 11:27:27.811144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.519 [2024-11-19 11:27:27.823576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.519 [2024-11-19 11:27:27.823971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.519 [2024-11-19 11:27:27.823997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.519 [2024-11-19 11:27:27.824012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.519 [2024-11-19 11:27:27.824203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.519 [2024-11-19 11:27:27.824424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.519 [2024-11-19 11:27:27.824446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.519 [2024-11-19 11:27:27.824460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.519 [2024-11-19 11:27:27.824473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.519 [2024-11-19 11:27:27.836925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.519 [2024-11-19 11:27:27.837338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.519 [2024-11-19 11:27:27.837387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.519 [2024-11-19 11:27:27.837409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.519 [2024-11-19 11:27:27.837600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.519 [2024-11-19 11:27:27.837824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.519 [2024-11-19 11:27:27.837844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.519 [2024-11-19 11:27:27.837858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.519 [2024-11-19 11:27:27.837871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.519 [2024-11-19 11:27:27.850269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.519 [2024-11-19 11:27:27.850687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.519 [2024-11-19 11:27:27.850712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.519 [2024-11-19 11:27:27.850727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.519 [2024-11-19 11:27:27.850911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.519 [2024-11-19 11:27:27.851098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.519 [2024-11-19 11:27:27.851117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.519 [2024-11-19 11:27:27.851130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.519 [2024-11-19 11:27:27.851142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.519 [2024-11-19 11:27:27.863319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.519 [2024-11-19 11:27:27.863732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.519 [2024-11-19 11:27:27.863758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.519 [2024-11-19 11:27:27.863773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.519 [2024-11-19 11:27:27.863957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.519 [2024-11-19 11:27:27.864145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.519 [2024-11-19 11:27:27.864163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.519 [2024-11-19 11:27:27.864175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.519 [2024-11-19 11:27:27.864187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.519 [2024-11-19 11:27:27.876506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.519 [2024-11-19 11:27:27.876988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.519 [2024-11-19 11:27:27.877039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.520 [2024-11-19 11:27:27.877053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.520 [2024-11-19 11:27:27.877237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.520 [2024-11-19 11:27:27.877477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.520 [2024-11-19 11:27:27.877498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.520 [2024-11-19 11:27:27.877511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.520 [2024-11-19 11:27:27.877523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.520 [2024-11-19 11:27:27.889555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.520 [2024-11-19 11:27:27.889965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.520 [2024-11-19 11:27:27.889991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.520 [2024-11-19 11:27:27.890006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.520 [2024-11-19 11:27:27.890191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.520 [2024-11-19 11:27:27.890404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.520 [2024-11-19 11:27:27.890425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.520 [2024-11-19 11:27:27.890439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.520 [2024-11-19 11:27:27.890451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.520 [2024-11-19 11:27:27.902590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.520 [2024-11-19 11:27:27.902935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.520 [2024-11-19 11:27:27.902960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.520 [2024-11-19 11:27:27.902974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.520 [2024-11-19 11:27:27.903158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.520 [2024-11-19 11:27:27.903351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.520 [2024-11-19 11:27:27.903397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.520 [2024-11-19 11:27:27.903411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.520 [2024-11-19 11:27:27.903425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.520 [2024-11-19 11:27:27.915694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.520 [2024-11-19 11:27:27.916086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.520 [2024-11-19 11:27:27.916113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.520 [2024-11-19 11:27:27.916127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.520 [2024-11-19 11:27:27.916312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.520 [2024-11-19 11:27:27.916536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.520 [2024-11-19 11:27:27.916558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.520 [2024-11-19 11:27:27.916576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.520 [2024-11-19 11:27:27.916590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.520 [2024-11-19 11:27:27.928643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.520 [2024-11-19 11:27:27.929060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.520 [2024-11-19 11:27:27.929094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.520 [2024-11-19 11:27:27.929107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.520 [2024-11-19 11:27:27.929291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.520 [2024-11-19 11:27:27.929535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.520 [2024-11-19 11:27:27.929557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.520 [2024-11-19 11:27:27.929571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.520 [2024-11-19 11:27:27.929584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.520 [2024-11-19 11:27:27.941697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.520 [2024-11-19 11:27:27.942063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.520 [2024-11-19 11:27:27.942088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.520 [2024-11-19 11:27:27.942102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.520 [2024-11-19 11:27:27.942285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.520 [2024-11-19 11:27:27.942522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.520 [2024-11-19 11:27:27.942544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.520 [2024-11-19 11:27:27.942558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.520 [2024-11-19 11:27:27.942571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.520 [2024-11-19 11:27:27.954743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.520 [2024-11-19 11:27:27.955132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.520 [2024-11-19 11:27:27.955157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.520 [2024-11-19 11:27:27.955172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.520 [2024-11-19 11:27:27.955356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.520 [2024-11-19 11:27:27.955606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.520 [2024-11-19 11:27:27.955626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.520 [2024-11-19 11:27:27.955640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.520 [2024-11-19 11:27:27.955663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.520 [2024-11-19 11:27:27.967962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.520 [2024-11-19 11:27:27.968268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.520 [2024-11-19 11:27:27.968293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.520 [2024-11-19 11:27:27.968307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.520 [2024-11-19 11:27:27.968520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.520 [2024-11-19 11:27:27.968732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.520 [2024-11-19 11:27:27.968751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.520 [2024-11-19 11:27:27.968763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.520 [2024-11-19 11:27:27.968775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.520 [2024-11-19 11:27:27.981210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.520 [2024-11-19 11:27:27.981664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.520 [2024-11-19 11:27:27.981689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.520 [2024-11-19 11:27:27.981703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.520 [2024-11-19 11:27:27.981886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.520 [2024-11-19 11:27:27.982073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.520 [2024-11-19 11:27:27.982091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.520 [2024-11-19 11:27:27.982104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.520 [2024-11-19 11:27:27.982115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.520 [2024-11-19 11:27:27.994330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.520 [2024-11-19 11:27:27.994740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.520 [2024-11-19 11:27:27.994765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.520 [2024-11-19 11:27:27.994779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.520 [2024-11-19 11:27:27.994962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.520 [2024-11-19 11:27:27.995149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.521 [2024-11-19 11:27:27.995170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.521 [2024-11-19 11:27:27.995182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.521 [2024-11-19 11:27:27.995195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.521 [2024-11-19 11:27:28.007523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.521 [2024-11-19 11:27:28.007891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.521 [2024-11-19 11:27:28.007930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.521 [2024-11-19 11:27:28.007953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.521 [2024-11-19 11:27:28.008142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.521 [2024-11-19 11:27:28.008347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.521 [2024-11-19 11:27:28.008394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.521 [2024-11-19 11:27:28.008409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.521 [2024-11-19 11:27:28.008421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.781 [2024-11-19 11:27:28.021100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.781 [2024-11-19 11:27:28.021413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.781 [2024-11-19 11:27:28.021439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.781 [2024-11-19 11:27:28.021454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.781 [2024-11-19 11:27:28.021644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.781 [2024-11-19 11:27:28.021850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.781 [2024-11-19 11:27:28.021870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.781 [2024-11-19 11:27:28.021885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.781 [2024-11-19 11:27:28.021897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.781 [2024-11-19 11:27:28.034752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.781 [2024-11-19 11:27:28.035103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.781 [2024-11-19 11:27:28.035129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.781 [2024-11-19 11:27:28.035143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.781 7342.33 IOPS, 28.68 MiB/s [2024-11-19T10:27:28.278Z] [2024-11-19 11:27:28.036808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.781 [2024-11-19 11:27:28.036995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.781 [2024-11-19 11:27:28.037015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.781 [2024-11-19 11:27:28.037027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.781 [2024-11-19 11:27:28.037039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.781 [2024-11-19 11:27:28.047976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.781 [2024-11-19 11:27:28.048285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.781 [2024-11-19 11:27:28.048310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.781 [2024-11-19 11:27:28.048325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.781 [2024-11-19 11:27:28.048539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.781 [2024-11-19 11:27:28.048752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.781 [2024-11-19 11:27:28.048772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.781 [2024-11-19 11:27:28.048784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.781 [2024-11-19 11:27:28.048796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.781 [2024-11-19 11:27:28.061255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.781 [2024-11-19 11:27:28.061593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.781 [2024-11-19 11:27:28.061620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.781 [2024-11-19 11:27:28.061636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.781 [2024-11-19 11:27:28.061835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.781 [2024-11-19 11:27:28.062039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.781 [2024-11-19 11:27:28.062060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.781 [2024-11-19 11:27:28.062072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.781 [2024-11-19 11:27:28.062084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.781 [2024-11-19 11:27:28.074438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.781 [2024-11-19 11:27:28.074829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.781 [2024-11-19 11:27:28.074881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:32.781 [2024-11-19 11:27:28.074895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:32.781 [2024-11-19 11:27:28.075078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:32.781 [2024-11-19 11:27:28.075265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.781 [2024-11-19 11:27:28.075285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.781 [2024-11-19 11:27:28.075297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.781 [2024-11-19 11:27:28.075309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.781 [2024-11-19 11:27:28.087716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.781 [2024-11-19 11:27:28.088023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.781 [2024-11-19 11:27:28.088048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.781 [2024-11-19 11:27:28.088062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.781 [2024-11-19 11:27:28.088246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.781 [2024-11-19 11:27:28.088462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.782 [2024-11-19 11:27:28.088483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.782 [2024-11-19 11:27:28.088501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.782 [2024-11-19 11:27:28.088514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.782 [2024-11-19 11:27:28.100881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.782 [2024-11-19 11:27:28.101200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.782 [2024-11-19 11:27:28.101225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.782 [2024-11-19 11:27:28.101240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.782 [2024-11-19 11:27:28.101467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.782 [2024-11-19 11:27:28.101682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.782 [2024-11-19 11:27:28.101702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.782 [2024-11-19 11:27:28.101714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.782 [2024-11-19 11:27:28.101726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.782 [2024-11-19 11:27:28.114109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.782 [2024-11-19 11:27:28.114438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.782 [2024-11-19 11:27:28.114465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.782 [2024-11-19 11:27:28.114479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.782 [2024-11-19 11:27:28.114668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.782 [2024-11-19 11:27:28.114862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.782 [2024-11-19 11:27:28.114882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.782 [2024-11-19 11:27:28.114895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.782 [2024-11-19 11:27:28.114907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.782 [2024-11-19 11:27:28.127262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.782 [2024-11-19 11:27:28.127603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.782 [2024-11-19 11:27:28.127629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.782 [2024-11-19 11:27:28.127644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.782 [2024-11-19 11:27:28.127844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.782 [2024-11-19 11:27:28.128032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.782 [2024-11-19 11:27:28.128051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.782 [2024-11-19 11:27:28.128063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.782 [2024-11-19 11:27:28.128075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.782 [2024-11-19 11:27:28.140490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.782 [2024-11-19 11:27:28.140842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.782 [2024-11-19 11:27:28.140868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.782 [2024-11-19 11:27:28.140883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.782 [2024-11-19 11:27:28.141067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.782 [2024-11-19 11:27:28.141255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.782 [2024-11-19 11:27:28.141274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.782 [2024-11-19 11:27:28.141286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.782 [2024-11-19 11:27:28.141298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.782 [2024-11-19 11:27:28.153599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.782 [2024-11-19 11:27:28.154007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.782 [2024-11-19 11:27:28.154032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.782 [2024-11-19 11:27:28.154046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.782 [2024-11-19 11:27:28.154230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.782 [2024-11-19 11:27:28.154450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.782 [2024-11-19 11:27:28.154470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.782 [2024-11-19 11:27:28.154483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.782 [2024-11-19 11:27:28.154495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.782 [2024-11-19 11:27:28.166774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.782 [2024-11-19 11:27:28.167118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.782 [2024-11-19 11:27:28.167143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.782 [2024-11-19 11:27:28.167157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.782 [2024-11-19 11:27:28.167341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.782 [2024-11-19 11:27:28.167567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.782 [2024-11-19 11:27:28.167587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.782 [2024-11-19 11:27:28.167599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.782 [2024-11-19 11:27:28.167612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.782 [2024-11-19 11:27:28.179966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.782 [2024-11-19 11:27:28.180341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.782 [2024-11-19 11:27:28.180385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.782 [2024-11-19 11:27:28.180407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.782 [2024-11-19 11:27:28.180597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.782 [2024-11-19 11:27:28.180814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.782 [2024-11-19 11:27:28.180834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.782 [2024-11-19 11:27:28.180847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.782 [2024-11-19 11:27:28.180859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.782 [2024-11-19 11:27:28.193099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.782 [2024-11-19 11:27:28.193465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.782 [2024-11-19 11:27:28.193490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.782 [2024-11-19 11:27:28.193504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.782 [2024-11-19 11:27:28.193688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.782 [2024-11-19 11:27:28.193875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.782 [2024-11-19 11:27:28.193895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.782 [2024-11-19 11:27:28.193908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.782 [2024-11-19 11:27:28.193920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.782 [2024-11-19 11:27:28.206210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.782 [2024-11-19 11:27:28.206591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.782 [2024-11-19 11:27:28.206616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.782 [2024-11-19 11:27:28.206630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.782 [2024-11-19 11:27:28.206813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.782 [2024-11-19 11:27:28.207000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.782 [2024-11-19 11:27:28.207018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.782 [2024-11-19 11:27:28.207031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.782 [2024-11-19 11:27:28.207043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.783 [2024-11-19 11:27:28.219324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.783 [2024-11-19 11:27:28.219700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.783 [2024-11-19 11:27:28.219725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.783 [2024-11-19 11:27:28.219739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.783 [2024-11-19 11:27:28.219923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.783 [2024-11-19 11:27:28.220126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.783 [2024-11-19 11:27:28.220146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.783 [2024-11-19 11:27:28.220158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.783 [2024-11-19 11:27:28.220170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.783 [2024-11-19 11:27:28.232437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.783 [2024-11-19 11:27:28.232813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.783 [2024-11-19 11:27:28.232838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.783 [2024-11-19 11:27:28.232852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.783 [2024-11-19 11:27:28.233036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.783 [2024-11-19 11:27:28.233233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.783 [2024-11-19 11:27:28.233253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.783 [2024-11-19 11:27:28.233266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.783 [2024-11-19 11:27:28.233278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.783 [2024-11-19 11:27:28.245679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.783 [2024-11-19 11:27:28.246051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.783 [2024-11-19 11:27:28.246087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.783 [2024-11-19 11:27:28.246102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.783 [2024-11-19 11:27:28.246286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.783 [2024-11-19 11:27:28.246537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.783 [2024-11-19 11:27:28.246558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.783 [2024-11-19 11:27:28.246571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.783 [2024-11-19 11:27:28.246584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.783 [2024-11-19 11:27:28.258646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.783 [2024-11-19 11:27:28.259033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.783 [2024-11-19 11:27:28.259058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.783 [2024-11-19 11:27:28.259072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.783 [2024-11-19 11:27:28.259256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.783 [2024-11-19 11:27:28.259472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.783 [2024-11-19 11:27:28.259493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.783 [2024-11-19 11:27:28.259510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.783 [2024-11-19 11:27:28.259523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.783 [2024-11-19 11:27:28.271886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.783 [2024-11-19 11:27:28.272370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.783 [2024-11-19 11:27:28.272397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:32.783 [2024-11-19 11:27:28.272426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:32.783 [2024-11-19 11:27:28.272660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:32.783 [2024-11-19 11:27:28.272867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.783 [2024-11-19 11:27:28.272887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.783 [2024-11-19 11:27:28.272900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.783 [2024-11-19 11:27:28.272913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.043 [2024-11-19 11:27:28.285219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.043 [2024-11-19 11:27:28.285607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.043 [2024-11-19 11:27:28.285632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.043 [2024-11-19 11:27:28.285647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.043 [2024-11-19 11:27:28.285831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.043 [2024-11-19 11:27:28.286018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.043 [2024-11-19 11:27:28.286036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.043 [2024-11-19 11:27:28.286048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.043 [2024-11-19 11:27:28.286061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.043 [2024-11-19 11:27:28.298252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.043 [2024-11-19 11:27:28.298664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.043 [2024-11-19 11:27:28.298689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.043 [2024-11-19 11:27:28.298703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.043 [2024-11-19 11:27:28.298887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.043 [2024-11-19 11:27:28.299075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.043 [2024-11-19 11:27:28.299093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.043 [2024-11-19 11:27:28.299106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.043 [2024-11-19 11:27:28.299118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.043 [2024-11-19 11:27:28.311256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.043 [2024-11-19 11:27:28.311631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.043 [2024-11-19 11:27:28.311657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.043 [2024-11-19 11:27:28.311671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.043 [2024-11-19 11:27:28.311854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.043 [2024-11-19 11:27:28.312041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.043 [2024-11-19 11:27:28.312066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.043 [2024-11-19 11:27:28.312078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.043 [2024-11-19 11:27:28.312089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.043 [2024-11-19 11:27:28.324504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.043 [2024-11-19 11:27:28.324866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.043 [2024-11-19 11:27:28.324891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.043 [2024-11-19 11:27:28.324906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.043 [2024-11-19 11:27:28.325095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.043 [2024-11-19 11:27:28.325295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.043 [2024-11-19 11:27:28.325316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.043 [2024-11-19 11:27:28.325330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.043 [2024-11-19 11:27:28.325343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.043 [2024-11-19 11:27:28.337639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.043 [2024-11-19 11:27:28.337982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.043 [2024-11-19 11:27:28.338007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.043 [2024-11-19 11:27:28.338022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.043 [2024-11-19 11:27:28.338206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.043 [2024-11-19 11:27:28.338434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.043 [2024-11-19 11:27:28.338471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.043 [2024-11-19 11:27:28.338485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.043 [2024-11-19 11:27:28.338498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.043 [2024-11-19 11:27:28.350663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.043 [2024-11-19 11:27:28.351046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.043 [2024-11-19 11:27:28.351070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.043 [2024-11-19 11:27:28.351090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.043 [2024-11-19 11:27:28.351274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.043 [2024-11-19 11:27:28.351493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.043 [2024-11-19 11:27:28.351515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.043 [2024-11-19 11:27:28.351528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.043 [2024-11-19 11:27:28.351540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.043 [2024-11-19 11:27:28.363759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.043 [2024-11-19 11:27:28.364159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.043 [2024-11-19 11:27:28.364184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.043 [2024-11-19 11:27:28.364199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.043 [2024-11-19 11:27:28.364408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.043 [2024-11-19 11:27:28.364623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.043 [2024-11-19 11:27:28.364645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.043 [2024-11-19 11:27:28.364658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.043 [2024-11-19 11:27:28.364671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.043 [2024-11-19 11:27:28.376755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.043 [2024-11-19 11:27:28.377134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.043 [2024-11-19 11:27:28.377160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.043 [2024-11-19 11:27:28.377175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.043 [2024-11-19 11:27:28.377359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.043 [2024-11-19 11:27:28.377585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.043 [2024-11-19 11:27:28.377606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.043 [2024-11-19 11:27:28.377619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.043 [2024-11-19 11:27:28.377632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.043 [2024-11-19 11:27:28.389809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.043 [2024-11-19 11:27:28.390163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.043 [2024-11-19 11:27:28.390188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.043 [2024-11-19 11:27:28.390202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.043 [2024-11-19 11:27:28.390412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.043 [2024-11-19 11:27:28.390612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.043 [2024-11-19 11:27:28.390633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.043 [2024-11-19 11:27:28.390647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.043 [2024-11-19 11:27:28.390659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.043 [2024-11-19 11:27:28.402898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.044 [2024-11-19 11:27:28.403294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.044 [2024-11-19 11:27:28.403319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.044 [2024-11-19 11:27:28.403335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.044 [2024-11-19 11:27:28.403565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.044 [2024-11-19 11:27:28.403780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.044 [2024-11-19 11:27:28.403815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.044 [2024-11-19 11:27:28.403827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.044 [2024-11-19 11:27:28.403840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.044 [2024-11-19 11:27:28.415860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.044 [2024-11-19 11:27:28.416201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.044 [2024-11-19 11:27:28.416237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.044 [2024-11-19 11:27:28.416251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.044 [2024-11-19 11:27:28.416481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.044 [2024-11-19 11:27:28.416681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.044 [2024-11-19 11:27:28.416700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.044 [2024-11-19 11:27:28.416714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.044 [2024-11-19 11:27:28.416726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.044 [2024-11-19 11:27:28.428981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.044 [2024-11-19 11:27:28.429303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.044 [2024-11-19 11:27:28.429329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.044 [2024-11-19 11:27:28.429345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.044 [2024-11-19 11:27:28.429564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.044 [2024-11-19 11:27:28.429776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.044 [2024-11-19 11:27:28.429796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.044 [2024-11-19 11:27:28.429813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.044 [2024-11-19 11:27:28.429826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.044 [2024-11-19 11:27:28.442433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.044 [2024-11-19 11:27:28.442873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.044 [2024-11-19 11:27:28.442898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:33.044 [2024-11-19 11:27:28.442912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:33.044 [2024-11-19 11:27:28.443101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:33.044 [2024-11-19 11:27:28.443294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.044 [2024-11-19 11:27:28.443313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.044 [2024-11-19 11:27:28.443325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.044 [2024-11-19 11:27:28.443337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.044 [2024-11-19 11:27:28.455747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.044 [2024-11-19 11:27:28.456162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.044 [2024-11-19 11:27:28.456211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.044 [2024-11-19 11:27:28.456225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.044 [2024-11-19 11:27:28.456444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.044 [2024-11-19 11:27:28.456648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.044 [2024-11-19 11:27:28.456685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.044 [2024-11-19 11:27:28.456699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.044 [2024-11-19 11:27:28.456711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.044 [2024-11-19 11:27:28.469129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.044 [2024-11-19 11:27:28.469508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.044 [2024-11-19 11:27:28.469555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.044 [2024-11-19 11:27:28.469571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.044 [2024-11-19 11:27:28.469779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.044 [2024-11-19 11:27:28.469974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.044 [2024-11-19 11:27:28.469994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.044 [2024-11-19 11:27:28.470008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.044 [2024-11-19 11:27:28.470021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.044 [2024-11-19 11:27:28.482525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.044 [2024-11-19 11:27:28.482986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.044 [2024-11-19 11:27:28.483039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.044 [2024-11-19 11:27:28.483054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.044 [2024-11-19 11:27:28.483243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.044 [2024-11-19 11:27:28.483483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.044 [2024-11-19 11:27:28.483505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.044 [2024-11-19 11:27:28.483518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.044 [2024-11-19 11:27:28.483532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.044 [2024-11-19 11:27:28.495745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.044 [2024-11-19 11:27:28.496156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.044 [2024-11-19 11:27:28.496182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.044 [2024-11-19 11:27:28.496196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.044 [2024-11-19 11:27:28.496428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.044 [2024-11-19 11:27:28.496634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.044 [2024-11-19 11:27:28.496669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.044 [2024-11-19 11:27:28.496683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.044 [2024-11-19 11:27:28.496696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.044 [2024-11-19 11:27:28.509061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.044 [2024-11-19 11:27:28.509455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.044 [2024-11-19 11:27:28.509482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.044 [2024-11-19 11:27:28.509497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.044 [2024-11-19 11:27:28.509714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.044 [2024-11-19 11:27:28.509907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.044 [2024-11-19 11:27:28.509927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.044 [2024-11-19 11:27:28.509940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.044 [2024-11-19 11:27:28.509953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.044 [2024-11-19 11:27:28.522380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.044 [2024-11-19 11:27:28.522769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.044 [2024-11-19 11:27:28.522794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.044 [2024-11-19 11:27:28.522813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.044 [2024-11-19 11:27:28.523003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.045 [2024-11-19 11:27:28.523195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.045 [2024-11-19 11:27:28.523216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.045 [2024-11-19 11:27:28.523228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.045 [2024-11-19 11:27:28.523241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.045 [2024-11-19 11:27:28.535883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.045 [2024-11-19 11:27:28.536324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.045 [2024-11-19 11:27:28.536374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.045 [2024-11-19 11:27:28.536393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.045 [2024-11-19 11:27:28.536622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.045 [2024-11-19 11:27:28.536863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.045 [2024-11-19 11:27:28.536885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.045 [2024-11-19 11:27:28.536898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.045 [2024-11-19 11:27:28.536911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.304 [2024-11-19 11:27:28.549192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.304 [2024-11-19 11:27:28.549633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.304 [2024-11-19 11:27:28.549685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.304 [2024-11-19 11:27:28.549700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.304 [2024-11-19 11:27:28.549888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.304 [2024-11-19 11:27:28.550081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.304 [2024-11-19 11:27:28.550101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.304 [2024-11-19 11:27:28.550114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.304 [2024-11-19 11:27:28.550126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.304 [2024-11-19 11:27:28.562359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.304 [2024-11-19 11:27:28.562808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.304 [2024-11-19 11:27:28.562834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.304 [2024-11-19 11:27:28.562849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.304 [2024-11-19 11:27:28.563038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.304 [2024-11-19 11:27:28.563237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.304 [2024-11-19 11:27:28.563256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.304 [2024-11-19 11:27:28.563269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.304 [2024-11-19 11:27:28.563281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.304 [2024-11-19 11:27:28.575751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.304 [2024-11-19 11:27:28.576142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.304 [2024-11-19 11:27:28.576169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.304 [2024-11-19 11:27:28.576185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.304 [2024-11-19 11:27:28.576406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.304 [2024-11-19 11:27:28.576611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.304 [2024-11-19 11:27:28.576632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.304 [2024-11-19 11:27:28.576647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.304 [2024-11-19 11:27:28.576660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.304 [2024-11-19 11:27:28.588937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.304 [2024-11-19 11:27:28.589359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.304 [2024-11-19 11:27:28.589407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.304 [2024-11-19 11:27:28.589423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.304 [2024-11-19 11:27:28.589617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.304 [2024-11-19 11:27:28.589834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.304 [2024-11-19 11:27:28.589855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.305 [2024-11-19 11:27:28.589869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.305 [2024-11-19 11:27:28.589881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.305 [2024-11-19 11:27:28.602206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.305 [2024-11-19 11:27:28.602630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.305 [2024-11-19 11:27:28.602670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.305 [2024-11-19 11:27:28.602685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.305 [2024-11-19 11:27:28.602875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.305 [2024-11-19 11:27:28.603067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.305 [2024-11-19 11:27:28.603098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.305 [2024-11-19 11:27:28.603116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.305 [2024-11-19 11:27:28.603130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.305 [2024-11-19 11:27:28.615490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.305 [2024-11-19 11:27:28.615885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.305 [2024-11-19 11:27:28.615910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.305 [2024-11-19 11:27:28.615925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.305 [2024-11-19 11:27:28.616114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.305 [2024-11-19 11:27:28.616311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.305 [2024-11-19 11:27:28.616332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.305 [2024-11-19 11:27:28.616360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.305 [2024-11-19 11:27:28.616384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.305 [2024-11-19 11:27:28.628800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.305 [2024-11-19 11:27:28.629112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.305 [2024-11-19 11:27:28.629138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.305 [2024-11-19 11:27:28.629153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.305 [2024-11-19 11:27:28.629357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.305 [2024-11-19 11:27:28.629567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.305 [2024-11-19 11:27:28.629587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.305 [2024-11-19 11:27:28.629600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.305 [2024-11-19 11:27:28.629613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.305 [2024-11-19 11:27:28.642114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.305 [2024-11-19 11:27:28.642455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.305 [2024-11-19 11:27:28.642482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.305 [2024-11-19 11:27:28.642498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.305 [2024-11-19 11:27:28.642712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.305 [2024-11-19 11:27:28.642906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.305 [2024-11-19 11:27:28.642925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.305 [2024-11-19 11:27:28.642939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.305 [2024-11-19 11:27:28.642951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.305 [2024-11-19 11:27:28.655437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.305 [2024-11-19 11:27:28.655789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.305 [2024-11-19 11:27:28.655815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.305 [2024-11-19 11:27:28.655830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.305 [2024-11-19 11:27:28.656019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.305 [2024-11-19 11:27:28.656212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.305 [2024-11-19 11:27:28.656231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.305 [2024-11-19 11:27:28.656244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.305 [2024-11-19 11:27:28.656257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.305 [2024-11-19 11:27:28.668772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.305 [2024-11-19 11:27:28.669125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.305 [2024-11-19 11:27:28.669153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.305 [2024-11-19 11:27:28.669167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.305 [2024-11-19 11:27:28.669382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.305 [2024-11-19 11:27:28.669612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.305 [2024-11-19 11:27:28.669633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.305 [2024-11-19 11:27:28.669647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.305 [2024-11-19 11:27:28.669676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.305 [2024-11-19 11:27:28.682126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.305 [2024-11-19 11:27:28.682519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.305 [2024-11-19 11:27:28.682546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.305 [2024-11-19 11:27:28.682561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.305 [2024-11-19 11:27:28.682770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.305 [2024-11-19 11:27:28.682963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.305 [2024-11-19 11:27:28.682982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.305 [2024-11-19 11:27:28.682994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.305 [2024-11-19 11:27:28.683007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.305 [2024-11-19 11:27:28.695462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.305 [2024-11-19 11:27:28.695834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.305 [2024-11-19 11:27:28.695860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.305 [2024-11-19 11:27:28.695879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.306 [2024-11-19 11:27:28.696069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.306 [2024-11-19 11:27:28.696263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.306 [2024-11-19 11:27:28.696283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.306 [2024-11-19 11:27:28.696295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.306 [2024-11-19 11:27:28.696308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.306 [2024-11-19 11:27:28.708854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.306 [2024-11-19 11:27:28.709214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.306 [2024-11-19 11:27:28.709240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.306 [2024-11-19 11:27:28.709254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.306 [2024-11-19 11:27:28.709473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.306 [2024-11-19 11:27:28.709689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.306 [2024-11-19 11:27:28.709708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.306 [2024-11-19 11:27:28.709722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.306 [2024-11-19 11:27:28.709734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.306 [2024-11-19 11:27:28.722082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.306 [2024-11-19 11:27:28.722410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.306 [2024-11-19 11:27:28.722437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.306 [2024-11-19 11:27:28.722453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.306 [2024-11-19 11:27:28.722649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.306 [2024-11-19 11:27:28.722858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.306 [2024-11-19 11:27:28.722877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.306 [2024-11-19 11:27:28.722890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.306 [2024-11-19 11:27:28.722902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.306 [2024-11-19 11:27:28.735333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.306 [2024-11-19 11:27:28.735705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.306 [2024-11-19 11:27:28.735731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.306 [2024-11-19 11:27:28.735746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.306 [2024-11-19 11:27:28.735944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.306 [2024-11-19 11:27:28.736144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.306 [2024-11-19 11:27:28.736165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.306 [2024-11-19 11:27:28.736178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.306 [2024-11-19 11:27:28.736191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.306 [2024-11-19 11:27:28.748576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.306 [2024-11-19 11:27:28.748981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.306 [2024-11-19 11:27:28.749007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.306 [2024-11-19 11:27:28.749021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.306 [2024-11-19 11:27:28.749211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.306 [2024-11-19 11:27:28.749475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.306 [2024-11-19 11:27:28.749498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.306 [2024-11-19 11:27:28.749511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.306 [2024-11-19 11:27:28.749524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.306 [2024-11-19 11:27:28.761825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.306 [2024-11-19 11:27:28.762211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.306 [2024-11-19 11:27:28.762237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.306 [2024-11-19 11:27:28.762252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.306 [2024-11-19 11:27:28.762475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.306 [2024-11-19 11:27:28.762708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.306 [2024-11-19 11:27:28.762730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.306 [2024-11-19 11:27:28.762744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.306 [2024-11-19 11:27:28.762757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.306 [2024-11-19 11:27:28.775263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.306 [2024-11-19 11:27:28.775677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.306 [2024-11-19 11:27:28.775704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.306 [2024-11-19 11:27:28.775735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.306 [2024-11-19 11:27:28.775925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.306 [2024-11-19 11:27:28.776118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.306 [2024-11-19 11:27:28.776139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.306 [2024-11-19 11:27:28.776157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.306 [2024-11-19 11:27:28.776171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.306 [2024-11-19 11:27:28.788587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.306 [2024-11-19 11:27:28.788988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.306 [2024-11-19 11:27:28.789024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.306 [2024-11-19 11:27:28.789039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.306 [2024-11-19 11:27:28.789228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.306 [2024-11-19 11:27:28.789465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.306 [2024-11-19 11:27:28.789487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.306 [2024-11-19 11:27:28.789500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.306 [2024-11-19 11:27:28.789514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.566 [2024-11-19 11:27:28.802250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.566 [2024-11-19 11:27:28.802688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.566 [2024-11-19 11:27:28.802714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.566 [2024-11-19 11:27:28.802729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.566 [2024-11-19 11:27:28.802918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.566 [2024-11-19 11:27:28.803115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.566 [2024-11-19 11:27:28.803135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.566 [2024-11-19 11:27:28.803148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.566 [2024-11-19 11:27:28.803161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.566 [2024-11-19 11:27:28.815439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.566 [2024-11-19 11:27:28.815861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.566 [2024-11-19 11:27:28.815888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.566 [2024-11-19 11:27:28.815903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.566 [2024-11-19 11:27:28.816093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.566 [2024-11-19 11:27:28.816285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.566 [2024-11-19 11:27:28.816304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.566 [2024-11-19 11:27:28.816317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.566 [2024-11-19 11:27:28.816329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.566 [2024-11-19 11:27:28.828807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.566 [2024-11-19 11:27:28.829194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.566 [2024-11-19 11:27:28.829220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.566 [2024-11-19 11:27:28.829235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.566 [2024-11-19 11:27:28.829469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.566 [2024-11-19 11:27:28.829720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.566 [2024-11-19 11:27:28.829742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.566 [2024-11-19 11:27:28.829756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.566 [2024-11-19 11:27:28.829769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.566 [2024-11-19 11:27:28.842135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.566 [2024-11-19 11:27:28.842553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.566 [2024-11-19 11:27:28.842580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.566 [2024-11-19 11:27:28.842596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.566 [2024-11-19 11:27:28.842802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.566 [2024-11-19 11:27:28.842995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.566 [2024-11-19 11:27:28.843027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.566 [2024-11-19 11:27:28.843040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.566 [2024-11-19 11:27:28.843053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.566 [2024-11-19 11:27:28.855293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.566 [2024-11-19 11:27:28.855697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.566 [2024-11-19 11:27:28.855734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.566 [2024-11-19 11:27:28.855748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.566 [2024-11-19 11:27:28.855937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.567 [2024-11-19 11:27:28.856130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.567 [2024-11-19 11:27:28.856151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.567 [2024-11-19 11:27:28.856164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.567 [2024-11-19 11:27:28.856176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.567 [2024-11-19 11:27:28.868589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.567 [2024-11-19 11:27:28.868976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.567 [2024-11-19 11:27:28.869002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.567 [2024-11-19 11:27:28.869022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.567 [2024-11-19 11:27:28.869212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.567 [2024-11-19 11:27:28.869454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.567 [2024-11-19 11:27:28.869476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.567 [2024-11-19 11:27:28.869490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.567 [2024-11-19 11:27:28.869503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.567 [2024-11-19 11:27:28.881976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.567 [2024-11-19 11:27:28.882391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.567 [2024-11-19 11:27:28.882418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.567 [2024-11-19 11:27:28.882434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.567 [2024-11-19 11:27:28.882629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.567 [2024-11-19 11:27:28.882837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.567 [2024-11-19 11:27:28.882858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.567 [2024-11-19 11:27:28.882872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.567 [2024-11-19 11:27:28.882884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.567 [2024-11-19 11:27:28.895280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.567 [2024-11-19 11:27:28.895717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.567 [2024-11-19 11:27:28.895742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.567 [2024-11-19 11:27:28.895757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.567 [2024-11-19 11:27:28.895946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.567 [2024-11-19 11:27:28.896138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.567 [2024-11-19 11:27:28.896158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.567 [2024-11-19 11:27:28.896172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.567 [2024-11-19 11:27:28.896185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.567 [2024-11-19 11:27:28.908600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.567 [2024-11-19 11:27:28.909007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.567 [2024-11-19 11:27:28.909033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.567 [2024-11-19 11:27:28.909048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.567 [2024-11-19 11:27:28.909237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.567 [2024-11-19 11:27:28.909481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.567 [2024-11-19 11:27:28.909504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.567 [2024-11-19 11:27:28.909518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.567 [2024-11-19 11:27:28.909532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.567 [2024-11-19 11:27:28.921871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.567 [2024-11-19 11:27:28.922256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.567 [2024-11-19 11:27:28.922281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.567 [2024-11-19 11:27:28.922296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.567 [2024-11-19 11:27:28.922535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.567 [2024-11-19 11:27:28.922755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.567 [2024-11-19 11:27:28.922789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.567 [2024-11-19 11:27:28.922802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.567 [2024-11-19 11:27:28.922815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.567 [2024-11-19 11:27:28.935152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.567 [2024-11-19 11:27:28.935571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.567 [2024-11-19 11:27:28.935599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.567 [2024-11-19 11:27:28.935614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.567 [2024-11-19 11:27:28.935821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.567 [2024-11-19 11:27:28.936014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.567 [2024-11-19 11:27:28.936034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.567 [2024-11-19 11:27:28.936047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.567 [2024-11-19 11:27:28.936059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.567 [2024-11-19 11:27:28.948510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.567 [2024-11-19 11:27:28.948927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.567 [2024-11-19 11:27:28.948959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.567 [2024-11-19 11:27:28.948973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.567 [2024-11-19 11:27:28.949162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.567 [2024-11-19 11:27:28.949379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.567 [2024-11-19 11:27:28.949401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.567 [2024-11-19 11:27:28.949435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.567 [2024-11-19 11:27:28.949449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.567 [2024-11-19 11:27:28.961826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.567 [2024-11-19 11:27:28.962186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.567 [2024-11-19 11:27:28.962211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.567 [2024-11-19 11:27:28.962226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.567 [2024-11-19 11:27:28.962460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.567 [2024-11-19 11:27:28.962666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.567 [2024-11-19 11:27:28.962700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.567 [2024-11-19 11:27:28.962714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.567 [2024-11-19 11:27:28.962726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.567 [2024-11-19 11:27:28.975157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.567 [2024-11-19 11:27:28.975522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.567 [2024-11-19 11:27:28.975549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.567 [2024-11-19 11:27:28.975564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.567 [2024-11-19 11:27:28.975771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.567 [2024-11-19 11:27:28.975965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.567 [2024-11-19 11:27:28.975985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.567 [2024-11-19 11:27:28.975997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.568 [2024-11-19 11:27:28.976009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.568 [2024-11-19 11:27:28.988483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.568 [2024-11-19 11:27:28.988891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.568 [2024-11-19 11:27:28.988915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.568 [2024-11-19 11:27:28.988930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.568 [2024-11-19 11:27:28.989120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.568 [2024-11-19 11:27:28.989312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.568 [2024-11-19 11:27:28.989333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.568 [2024-11-19 11:27:28.989369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.568 [2024-11-19 11:27:28.989384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.568 [2024-11-19 11:27:29.001808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.568 [2024-11-19 11:27:29.002151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.568 [2024-11-19 11:27:29.002184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.568 [2024-11-19 11:27:29.002199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.568 [2024-11-19 11:27:29.002418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.568 [2024-11-19 11:27:29.002623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.568 [2024-11-19 11:27:29.002645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.568 [2024-11-19 11:27:29.002659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.568 [2024-11-19 11:27:29.002672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.568 [2024-11-19 11:27:29.015049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.568 [2024-11-19 11:27:29.015428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.568 [2024-11-19 11:27:29.015455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.568 [2024-11-19 11:27:29.015470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.568 [2024-11-19 11:27:29.015665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.568 [2024-11-19 11:27:29.015875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.568 [2024-11-19 11:27:29.015896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.568 [2024-11-19 11:27:29.015909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.568 [2024-11-19 11:27:29.015922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.568 [2024-11-19 11:27:29.028274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.568 [2024-11-19 11:27:29.028711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.568 [2024-11-19 11:27:29.028736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.568 [2024-11-19 11:27:29.028751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.568 [2024-11-19 11:27:29.028940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.568 [2024-11-19 11:27:29.029132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.568 [2024-11-19 11:27:29.029153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.568 [2024-11-19 11:27:29.029166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.568 [2024-11-19 11:27:29.029179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.568 5506.75 IOPS, 21.51 MiB/s [2024-11-19T10:27:29.065Z] [2024-11-19 11:27:29.041989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.568 [2024-11-19 11:27:29.042338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.568 [2024-11-19 11:27:29.042391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.568 [2024-11-19 11:27:29.042409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.568 [2024-11-19 11:27:29.042613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.568 [2024-11-19 11:27:29.042822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.568 [2024-11-19 11:27:29.042843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.568 [2024-11-19 11:27:29.042856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.568 [2024-11-19 11:27:29.042869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.568 [2024-11-19 11:27:29.055295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.568 [2024-11-19 11:27:29.055722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.568 [2024-11-19 11:27:29.055748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.568 [2024-11-19 11:27:29.055763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.568 [2024-11-19 11:27:29.055952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.568 [2024-11-19 11:27:29.056144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.568 [2024-11-19 11:27:29.056165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.568 [2024-11-19 11:27:29.056178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.568 [2024-11-19 11:27:29.056190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.829 [2024-11-19 11:27:29.068930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.829 [2024-11-19 11:27:29.069286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.829 [2024-11-19 11:27:29.069312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.829 [2024-11-19 11:27:29.069337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.829 [2024-11-19 11:27:29.069573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.829 [2024-11-19 11:27:29.069809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.829 [2024-11-19 11:27:29.069829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.829 [2024-11-19 11:27:29.069842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.829 [2024-11-19 11:27:29.069856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.829 [2024-11-19 11:27:29.082400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.829 [2024-11-19 11:27:29.082788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.829 [2024-11-19 11:27:29.082814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.829 [2024-11-19 11:27:29.082829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.829 [2024-11-19 11:27:29.083024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.829 [2024-11-19 11:27:29.083230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.829 [2024-11-19 11:27:29.083251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.829 [2024-11-19 11:27:29.083278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.829 [2024-11-19 11:27:29.083292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.829 [2024-11-19 11:27:29.095579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.829 [2024-11-19 11:27:29.095964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.829 [2024-11-19 11:27:29.095990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.829 [2024-11-19 11:27:29.096006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.829 [2024-11-19 11:27:29.096195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.829 [2024-11-19 11:27:29.096415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.829 [2024-11-19 11:27:29.096437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.829 [2024-11-19 11:27:29.096451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.829 [2024-11-19 11:27:29.096463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.829 [2024-11-19 11:27:29.108839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.829 [2024-11-19 11:27:29.109214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.829 [2024-11-19 11:27:29.109240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.829 [2024-11-19 11:27:29.109254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.829 [2024-11-19 11:27:29.109478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.829 [2024-11-19 11:27:29.109691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.829 [2024-11-19 11:27:29.109712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.829 [2024-11-19 11:27:29.109725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.829 [2024-11-19 11:27:29.109738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.829 [2024-11-19 11:27:29.122100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.829 [2024-11-19 11:27:29.122526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.829 [2024-11-19 11:27:29.122554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.829 [2024-11-19 11:27:29.122569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.829 [2024-11-19 11:27:29.122777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.829 [2024-11-19 11:27:29.122970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.829 [2024-11-19 11:27:29.122990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.829 [2024-11-19 11:27:29.123008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.829 [2024-11-19 11:27:29.123021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.829 [2024-11-19 11:27:29.135419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.829 [2024-11-19 11:27:29.135796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.829 [2024-11-19 11:27:29.135823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.829 [2024-11-19 11:27:29.135837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.829 [2024-11-19 11:27:29.136027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.829 [2024-11-19 11:27:29.136220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.829 [2024-11-19 11:27:29.136240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.829 [2024-11-19 11:27:29.136253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.829 [2024-11-19 11:27:29.136266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.829 [2024-11-19 11:27:29.148862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.829 [2024-11-19 11:27:29.149192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.829 [2024-11-19 11:27:29.149218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.829 [2024-11-19 11:27:29.149233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.829 [2024-11-19 11:27:29.149451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.829 [2024-11-19 11:27:29.149651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.829 [2024-11-19 11:27:29.149685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.829 [2024-11-19 11:27:29.149699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.829 [2024-11-19 11:27:29.149711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.829 [2024-11-19 11:27:29.162120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.829 [2024-11-19 11:27:29.162490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.829 [2024-11-19 11:27:29.162517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.829 [2024-11-19 11:27:29.162533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.829 [2024-11-19 11:27:29.162762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.829 [2024-11-19 11:27:29.162956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.829 [2024-11-19 11:27:29.162977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.829 [2024-11-19 11:27:29.162990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.829 [2024-11-19 11:27:29.163002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.829 [2024-11-19 11:27:29.175444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.829 [2024-11-19 11:27:29.175776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.829 [2024-11-19 11:27:29.175802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.829 [2024-11-19 11:27:29.175816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.829 [2024-11-19 11:27:29.176005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.829 [2024-11-19 11:27:29.176199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.829 [2024-11-19 11:27:29.176218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.830 [2024-11-19 11:27:29.176231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.830 [2024-11-19 11:27:29.176244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.830 [2024-11-19 11:27:29.188896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.830 [2024-11-19 11:27:29.189220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.830 [2024-11-19 11:27:29.189246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.830 [2024-11-19 11:27:29.189262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.830 [2024-11-19 11:27:29.189484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.830 [2024-11-19 11:27:29.189705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.830 [2024-11-19 11:27:29.189725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.830 [2024-11-19 11:27:29.189738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.830 [2024-11-19 11:27:29.189750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.830 [2024-11-19 11:27:29.202227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.830 [2024-11-19 11:27:29.202567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.830 [2024-11-19 11:27:29.202594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.830 [2024-11-19 11:27:29.202610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.830 [2024-11-19 11:27:29.202832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.830 [2024-11-19 11:27:29.203027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.830 [2024-11-19 11:27:29.203046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.830 [2024-11-19 11:27:29.203059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.830 [2024-11-19 11:27:29.203071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.830 [2024-11-19 11:27:29.215456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.830 [2024-11-19 11:27:29.215834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.830 [2024-11-19 11:27:29.215859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.830 [2024-11-19 11:27:29.215878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.830 [2024-11-19 11:27:29.216068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.830 [2024-11-19 11:27:29.216261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.830 [2024-11-19 11:27:29.216281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.830 [2024-11-19 11:27:29.216294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.830 [2024-11-19 11:27:29.216306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.830 [2024-11-19 11:27:29.228715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.830 [2024-11-19 11:27:29.229052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.830 [2024-11-19 11:27:29.229077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.830 [2024-11-19 11:27:29.229091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.830 [2024-11-19 11:27:29.229281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.830 [2024-11-19 11:27:29.229504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.830 [2024-11-19 11:27:29.229525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.830 [2024-11-19 11:27:29.229538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.830 [2024-11-19 11:27:29.229550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.830 [2024-11-19 11:27:29.241948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.830 [2024-11-19 11:27:29.242265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.830 [2024-11-19 11:27:29.242291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.830 [2024-11-19 11:27:29.242306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.830 [2024-11-19 11:27:29.242530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.830 [2024-11-19 11:27:29.242746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.830 [2024-11-19 11:27:29.242766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.830 [2024-11-19 11:27:29.242778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.830 [2024-11-19 11:27:29.242791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.830 [2024-11-19 11:27:29.255228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.830 [2024-11-19 11:27:29.255551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.830 [2024-11-19 11:27:29.255579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.830 [2024-11-19 11:27:29.255595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.830 [2024-11-19 11:27:29.255801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.830 [2024-11-19 11:27:29.256011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.830 [2024-11-19 11:27:29.256030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.830 [2024-11-19 11:27:29.256043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.830 [2024-11-19 11:27:29.256056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.830 [2024-11-19 11:27:29.268468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.830 [2024-11-19 11:27:29.268833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.830 [2024-11-19 11:27:29.268858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.830 [2024-11-19 11:27:29.268873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.830 [2024-11-19 11:27:29.269062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.830 [2024-11-19 11:27:29.269256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.830 [2024-11-19 11:27:29.269276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.830 [2024-11-19 11:27:29.269288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.830 [2024-11-19 11:27:29.269300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.830 [2024-11-19 11:27:29.281866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.830 [2024-11-19 11:27:29.282238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.830 [2024-11-19 11:27:29.282276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.830 [2024-11-19 11:27:29.282291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.830 [2024-11-19 11:27:29.282519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.830 [2024-11-19 11:27:29.282731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.830 [2024-11-19 11:27:29.282752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.830 [2024-11-19 11:27:29.282765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.830 [2024-11-19 11:27:29.282777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.830 [2024-11-19 11:27:29.295138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.830 [2024-11-19 11:27:29.295449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.830 [2024-11-19 11:27:29.295475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.830 [2024-11-19 11:27:29.295490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.830 [2024-11-19 11:27:29.295699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.830 [2024-11-19 11:27:29.295892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.830 [2024-11-19 11:27:29.295912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.830 [2024-11-19 11:27:29.295934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.830 [2024-11-19 11:27:29.295946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.830 [2024-11-19 11:27:29.308359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.830 [2024-11-19 11:27:29.308755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.831 [2024-11-19 11:27:29.308780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.831 [2024-11-19 11:27:29.308795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.831 [2024-11-19 11:27:29.308984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.831 [2024-11-19 11:27:29.309176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.831 [2024-11-19 11:27:29.309197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.831 [2024-11-19 11:27:29.309210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.831 [2024-11-19 11:27:29.309223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.831 [2024-11-19 11:27:29.322013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.831 [2024-11-19 11:27:29.322470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.831 [2024-11-19 11:27:29.322497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:33.831 [2024-11-19 11:27:29.322513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:33.831 [2024-11-19 11:27:29.322738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:33.831 [2024-11-19 11:27:29.322948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.831 [2024-11-19 11:27:29.322968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.831 [2024-11-19 11:27:29.322981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.831 [2024-11-19 11:27:29.323010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.091 [2024-11-19 11:27:29.335529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.091 [2024-11-19 11:27:29.335956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.091 [2024-11-19 11:27:29.335981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.091 [2024-11-19 11:27:29.335995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.091 [2024-11-19 11:27:29.336185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.091 [2024-11-19 11:27:29.336410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.091 [2024-11-19 11:27:29.336431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.091 [2024-11-19 11:27:29.336444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.091 [2024-11-19 11:27:29.336457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.091 [2024-11-19 11:27:29.348882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.091 [2024-11-19 11:27:29.349310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.091 [2024-11-19 11:27:29.349334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.091 [2024-11-19 11:27:29.349371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.091 [2024-11-19 11:27:29.349576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.091 [2024-11-19 11:27:29.349804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.091 [2024-11-19 11:27:29.349825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.091 [2024-11-19 11:27:29.349838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.091 [2024-11-19 11:27:29.349851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.091 [2024-11-19 11:27:29.362114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.091 [2024-11-19 11:27:29.362537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.091 [2024-11-19 11:27:29.362563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.092 [2024-11-19 11:27:29.362578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.092 [2024-11-19 11:27:29.362785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.092 [2024-11-19 11:27:29.362980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.092 [2024-11-19 11:27:29.363000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.092 [2024-11-19 11:27:29.363013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.092 [2024-11-19 11:27:29.363026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.092 [2024-11-19 11:27:29.375457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.092 [2024-11-19 11:27:29.375886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.092 [2024-11-19 11:27:29.375911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.092 [2024-11-19 11:27:29.375925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.092 [2024-11-19 11:27:29.376115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.092 [2024-11-19 11:27:29.376308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.092 [2024-11-19 11:27:29.376329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.092 [2024-11-19 11:27:29.376356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.092 [2024-11-19 11:27:29.376381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.092 [2024-11-19 11:27:29.388647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.092 [2024-11-19 11:27:29.389055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.092 [2024-11-19 11:27:29.389081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.092 [2024-11-19 11:27:29.389101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.092 [2024-11-19 11:27:29.389291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.092 [2024-11-19 11:27:29.389534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.092 [2024-11-19 11:27:29.389555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.092 [2024-11-19 11:27:29.389568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.092 [2024-11-19 11:27:29.389582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.092 [2024-11-19 11:27:29.401872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.092 [2024-11-19 11:27:29.402252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.092 [2024-11-19 11:27:29.402277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.092 [2024-11-19 11:27:29.402291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.092 [2024-11-19 11:27:29.402527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.092 [2024-11-19 11:27:29.402754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.092 [2024-11-19 11:27:29.402775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.092 [2024-11-19 11:27:29.402789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.092 [2024-11-19 11:27:29.402801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.092 [2024-11-19 11:27:29.414940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.092 [2024-11-19 11:27:29.415322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.092 [2024-11-19 11:27:29.415379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.092 [2024-11-19 11:27:29.415396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.092 [2024-11-19 11:27:29.415585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.092 [2024-11-19 11:27:29.415789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.092 [2024-11-19 11:27:29.415807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.092 [2024-11-19 11:27:29.415819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.092 [2024-11-19 11:27:29.415831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.092 [2024-11-19 11:27:29.428008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.092 [2024-11-19 11:27:29.428429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.092 [2024-11-19 11:27:29.428455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.092 [2024-11-19 11:27:29.428470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.092 [2024-11-19 11:27:29.428654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.092 [2024-11-19 11:27:29.428845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.092 [2024-11-19 11:27:29.428866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.092 [2024-11-19 11:27:29.428878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.092 [2024-11-19 11:27:29.428890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.092 [2024-11-19 11:27:29.441046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.092 [2024-11-19 11:27:29.441448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.092 [2024-11-19 11:27:29.441474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.092 [2024-11-19 11:27:29.441488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.092 [2024-11-19 11:27:29.441671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.092 [2024-11-19 11:27:29.441859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.092 [2024-11-19 11:27:29.441877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.092 [2024-11-19 11:27:29.441890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.092 [2024-11-19 11:27:29.441902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.092 [2024-11-19 11:27:29.454062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.092 [2024-11-19 11:27:29.454430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.092 [2024-11-19 11:27:29.454455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.092 [2024-11-19 11:27:29.454469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.092 [2024-11-19 11:27:29.454653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.092 [2024-11-19 11:27:29.454840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.092 [2024-11-19 11:27:29.454859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.092 [2024-11-19 11:27:29.454872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.092 [2024-11-19 11:27:29.454883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.092 [2024-11-19 11:27:29.467142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.092 [2024-11-19 11:27:29.467561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.092 [2024-11-19 11:27:29.467588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.092 [2024-11-19 11:27:29.467603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.092 [2024-11-19 11:27:29.467819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.092 [2024-11-19 11:27:29.468007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.092 [2024-11-19 11:27:29.468027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.092 [2024-11-19 11:27:29.468045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.092 [2024-11-19 11:27:29.468058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.092 [2024-11-19 11:27:29.480160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.092 [2024-11-19 11:27:29.480555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.092 [2024-11-19 11:27:29.480595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.092 [2024-11-19 11:27:29.480610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.092 [2024-11-19 11:27:29.480819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.092 [2024-11-19 11:27:29.481040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.093 [2024-11-19 11:27:29.481060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.093 [2024-11-19 11:27:29.481074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.093 [2024-11-19 11:27:29.481087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.093 [2024-11-19 11:27:29.493205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.093 [2024-11-19 11:27:29.493622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.093 [2024-11-19 11:27:29.493647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.093 [2024-11-19 11:27:29.493661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.093 [2024-11-19 11:27:29.493845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.093 [2024-11-19 11:27:29.494032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.093 [2024-11-19 11:27:29.494050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.093 [2024-11-19 11:27:29.494062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.093 [2024-11-19 11:27:29.494074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.093 [2024-11-19 11:27:29.506213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.093 [2024-11-19 11:27:29.506618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.093 [2024-11-19 11:27:29.506643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.093 [2024-11-19 11:27:29.506657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.093 [2024-11-19 11:27:29.506840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.093 [2024-11-19 11:27:29.507027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.093 [2024-11-19 11:27:29.507046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.093 [2024-11-19 11:27:29.507058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.093 [2024-11-19 11:27:29.507070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.093 [2024-11-19 11:27:29.519316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.093 [2024-11-19 11:27:29.519721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.093 [2024-11-19 11:27:29.519746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.093 [2024-11-19 11:27:29.519760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.093 [2024-11-19 11:27:29.519944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.093 [2024-11-19 11:27:29.520131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.093 [2024-11-19 11:27:29.520150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.093 [2024-11-19 11:27:29.520163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.093 [2024-11-19 11:27:29.520175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.093 [2024-11-19 11:27:29.532531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.093 [2024-11-19 11:27:29.532934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.093 [2024-11-19 11:27:29.532959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.093 [2024-11-19 11:27:29.532972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.093 [2024-11-19 11:27:29.533156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.093 [2024-11-19 11:27:29.533348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.093 [2024-11-19 11:27:29.533392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.093 [2024-11-19 11:27:29.533409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.093 [2024-11-19 11:27:29.533422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.093 [2024-11-19 11:27:29.545606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.093 [2024-11-19 11:27:29.545982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.093 [2024-11-19 11:27:29.546007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.093 [2024-11-19 11:27:29.546021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.093 [2024-11-19 11:27:29.546205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.093 [2024-11-19 11:27:29.546420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.093 [2024-11-19 11:27:29.546441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.093 [2024-11-19 11:27:29.546454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.093 [2024-11-19 11:27:29.546466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.093 [2024-11-19 11:27:29.558667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.093 [2024-11-19 11:27:29.559035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.093 [2024-11-19 11:27:29.559059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.093 [2024-11-19 11:27:29.559078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.093 [2024-11-19 11:27:29.559262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.093 [2024-11-19 11:27:29.559491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.093 [2024-11-19 11:27:29.559512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.093 [2024-11-19 11:27:29.559525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.093 [2024-11-19 11:27:29.559537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.093 [2024-11-19 11:27:29.571816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.093 [2024-11-19 11:27:29.572221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.093 [2024-11-19 11:27:29.572245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.093 [2024-11-19 11:27:29.572260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.093 [2024-11-19 11:27:29.572474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.093 [2024-11-19 11:27:29.572667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.093 [2024-11-19 11:27:29.572701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.093 [2024-11-19 11:27:29.572714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.093 [2024-11-19 11:27:29.572726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.093 [2024-11-19 11:27:29.585572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.093 [2024-11-19 11:27:29.585997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.093 [2024-11-19 11:27:29.586022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.093 [2024-11-19 11:27:29.586037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.093 [2024-11-19 11:27:29.586227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.093 [2024-11-19 11:27:29.586479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.093 [2024-11-19 11:27:29.586504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.093 [2024-11-19 11:27:29.586519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.093 [2024-11-19 11:27:29.586532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.353 [2024-11-19 11:27:29.598792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.353 [2024-11-19 11:27:29.599170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.353 [2024-11-19 11:27:29.599220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.353 [2024-11-19 11:27:29.599235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.353 [2024-11-19 11:27:29.599488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.353 [2024-11-19 11:27:29.599726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.353 [2024-11-19 11:27:29.599746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.353 [2024-11-19 11:27:29.599760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.353 [2024-11-19 11:27:29.599788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.353 [2024-11-19 11:27:29.611890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.353 [2024-11-19 11:27:29.612279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.353 [2024-11-19 11:27:29.612327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.353 [2024-11-19 11:27:29.612342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.353 [2024-11-19 11:27:29.612572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.353 [2024-11-19 11:27:29.612797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.353 [2024-11-19 11:27:29.612818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.353 [2024-11-19 11:27:29.612831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.353 [2024-11-19 11:27:29.612843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.353 [2024-11-19 11:27:29.624937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.353 [2024-11-19 11:27:29.625357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.353 [2024-11-19 11:27:29.625413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.353 [2024-11-19 11:27:29.625428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.353 [2024-11-19 11:27:29.625613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.353 [2024-11-19 11:27:29.625800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.353 [2024-11-19 11:27:29.625820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.353 [2024-11-19 11:27:29.625833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.353 [2024-11-19 11:27:29.625845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.353 [2024-11-19 11:27:29.638048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.353 [2024-11-19 11:27:29.638468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.353 [2024-11-19 11:27:29.638494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.353 [2024-11-19 11:27:29.638508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.353 [2024-11-19 11:27:29.638692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.353 [2024-11-19 11:27:29.638889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.353 [2024-11-19 11:27:29.638910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.353 [2024-11-19 11:27:29.638927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.353 [2024-11-19 11:27:29.638941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.353 [2024-11-19 11:27:29.651189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.353 [2024-11-19 11:27:29.651596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.353 [2024-11-19 11:27:29.651621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.353 [2024-11-19 11:27:29.651635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.353 [2024-11-19 11:27:29.651819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.353 [2024-11-19 11:27:29.652006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.353 [2024-11-19 11:27:29.652026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.353 [2024-11-19 11:27:29.652038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.353 [2024-11-19 11:27:29.652051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.353 [2024-11-19 11:27:29.664441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.353 [2024-11-19 11:27:29.664872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.353 [2024-11-19 11:27:29.664898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.353 [2024-11-19 11:27:29.664912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.353 [2024-11-19 11:27:29.665096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.353 [2024-11-19 11:27:29.665283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.354 [2024-11-19 11:27:29.665301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.354 [2024-11-19 11:27:29.665314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.354 [2024-11-19 11:27:29.665327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.354 [2024-11-19 11:27:29.677514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.354 [2024-11-19 11:27:29.677954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.354 [2024-11-19 11:27:29.678002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.354 [2024-11-19 11:27:29.678017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.354 [2024-11-19 11:27:29.678205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.354 [2024-11-19 11:27:29.678436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.354 [2024-11-19 11:27:29.678457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.354 [2024-11-19 11:27:29.678475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.354 [2024-11-19 11:27:29.678487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.354 [2024-11-19 11:27:29.690615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.354 [2024-11-19 11:27:29.691012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.354 [2024-11-19 11:27:29.691036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.354 [2024-11-19 11:27:29.691050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.354 [2024-11-19 11:27:29.691234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.354 [2024-11-19 11:27:29.691455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.354 [2024-11-19 11:27:29.691477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.354 [2024-11-19 11:27:29.691490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.354 [2024-11-19 11:27:29.691502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.354 [2024-11-19 11:27:29.703666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.354 [2024-11-19 11:27:29.704076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.354 [2024-11-19 11:27:29.704101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.354 [2024-11-19 11:27:29.704115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.354 [2024-11-19 11:27:29.704299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.354 [2024-11-19 11:27:29.704537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.354 [2024-11-19 11:27:29.704557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.354 [2024-11-19 11:27:29.704570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.354 [2024-11-19 11:27:29.704584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.354 [2024-11-19 11:27:29.716682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.354 [2024-11-19 11:27:29.717074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.354 [2024-11-19 11:27:29.717125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.354 [2024-11-19 11:27:29.717139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.354 [2024-11-19 11:27:29.717323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.354 [2024-11-19 11:27:29.717540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.354 [2024-11-19 11:27:29.717560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.354 [2024-11-19 11:27:29.717573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.354 [2024-11-19 11:27:29.717586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.354 [2024-11-19 11:27:29.729792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.354 [2024-11-19 11:27:29.730173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.354 [2024-11-19 11:27:29.730224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.354 [2024-11-19 11:27:29.730244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.354 [2024-11-19 11:27:29.730473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.354 [2024-11-19 11:27:29.730673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.354 [2024-11-19 11:27:29.730694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.354 [2024-11-19 11:27:29.730707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.354 [2024-11-19 11:27:29.730721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.354 [2024-11-19 11:27:29.742900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.354 [2024-11-19 11:27:29.743338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.354 [2024-11-19 11:27:29.743399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.354 [2024-11-19 11:27:29.743413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.354 [2024-11-19 11:27:29.743598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.354 [2024-11-19 11:27:29.743785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.354 [2024-11-19 11:27:29.743803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.354 [2024-11-19 11:27:29.743815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.354 [2024-11-19 11:27:29.743826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.354 [2024-11-19 11:27:29.755941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.354 [2024-11-19 11:27:29.756347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.354 [2024-11-19 11:27:29.756405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.354 [2024-11-19 11:27:29.756420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.354 [2024-11-19 11:27:29.756609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.354 [2024-11-19 11:27:29.756802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.354 [2024-11-19 11:27:29.756823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.354 [2024-11-19 11:27:29.756835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.354 [2024-11-19 11:27:29.756848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.354 [2024-11-19 11:27:29.768956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.354 [2024-11-19 11:27:29.769394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.354 [2024-11-19 11:27:29.769420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.354 [2024-11-19 11:27:29.769434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.354 [2024-11-19 11:27:29.769618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.354 [2024-11-19 11:27:29.769811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.354 [2024-11-19 11:27:29.769830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.354 [2024-11-19 11:27:29.769843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.354 [2024-11-19 11:27:29.769854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.354 [2024-11-19 11:27:29.782262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.354 [2024-11-19 11:27:29.782644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.354 [2024-11-19 11:27:29.782710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.354 [2024-11-19 11:27:29.782724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.354 [2024-11-19 11:27:29.782908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.354 [2024-11-19 11:27:29.783096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.354 [2024-11-19 11:27:29.783115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.354 [2024-11-19 11:27:29.783128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.355 [2024-11-19 11:27:29.783139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.355 [2024-11-19 11:27:29.795559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.355 [2024-11-19 11:27:29.795902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.355 [2024-11-19 11:27:29.795926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.355 [2024-11-19 11:27:29.795940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.355 [2024-11-19 11:27:29.796124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.355 [2024-11-19 11:27:29.796313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.355 [2024-11-19 11:27:29.796331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.355 [2024-11-19 11:27:29.796359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.355 [2024-11-19 11:27:29.796383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.355 [2024-11-19 11:27:29.808736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.355 [2024-11-19 11:27:29.809032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.355 [2024-11-19 11:27:29.809057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.355 [2024-11-19 11:27:29.809072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.355 [2024-11-19 11:27:29.809257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.355 [2024-11-19 11:27:29.809491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.355 [2024-11-19 11:27:29.809512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.355 [2024-11-19 11:27:29.809531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.355 [2024-11-19 11:27:29.809544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.355 [2024-11-19 11:27:29.821829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.355 [2024-11-19 11:27:29.822224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.355 [2024-11-19 11:27:29.822249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.355 [2024-11-19 11:27:29.822263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.355 [2024-11-19 11:27:29.822492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.355 [2024-11-19 11:27:29.822694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.355 [2024-11-19 11:27:29.822715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.355 [2024-11-19 11:27:29.822742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.355 [2024-11-19 11:27:29.822756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.355 [2024-11-19 11:27:29.835057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.355 [2024-11-19 11:27:29.835424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.355 [2024-11-19 11:27:29.835450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.355 [2024-11-19 11:27:29.835465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.355 [2024-11-19 11:27:29.835668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.355 [2024-11-19 11:27:29.835857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.355 [2024-11-19 11:27:29.835877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.355 [2024-11-19 11:27:29.835890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.355 [2024-11-19 11:27:29.835903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.355 [2024-11-19 11:27:29.848690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.615 [2024-11-19 11:27:29.849183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.615 [2024-11-19 11:27:29.849234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.615 [2024-11-19 11:27:29.849249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.615 [2024-11-19 11:27:29.849512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.615 [2024-11-19 11:27:29.849773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.615 [2024-11-19 11:27:29.849794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.615 [2024-11-19 11:27:29.849807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.615 [2024-11-19 11:27:29.849820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.615 [2024-11-19 11:27:29.861841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.615 [2024-11-19 11:27:29.862137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.615 [2024-11-19 11:27:29.862187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.615 [2024-11-19 11:27:29.862202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.615 [2024-11-19 11:27:29.862413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.615 [2024-11-19 11:27:29.862607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.615 [2024-11-19 11:27:29.862627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.615 [2024-11-19 11:27:29.862640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.615 [2024-11-19 11:27:29.862666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.615 [2024-11-19 11:27:29.874997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.615 [2024-11-19 11:27:29.875352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.615 [2024-11-19 11:27:29.875426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.615 [2024-11-19 11:27:29.875441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.615 [2024-11-19 11:27:29.875637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.615 [2024-11-19 11:27:29.875857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.615 [2024-11-19 11:27:29.875876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.615 [2024-11-19 11:27:29.875888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.615 [2024-11-19 11:27:29.875900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.615 [2024-11-19 11:27:29.888006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.615 [2024-11-19 11:27:29.888420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.615 [2024-11-19 11:27:29.888445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.615 [2024-11-19 11:27:29.888459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.615 [2024-11-19 11:27:29.888643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.615 [2024-11-19 11:27:29.888831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.615 [2024-11-19 11:27:29.888851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.615 [2024-11-19 11:27:29.888863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.615 [2024-11-19 11:27:29.888875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.615 [2024-11-19 11:27:29.901115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.615 [2024-11-19 11:27:29.901521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.615 [2024-11-19 11:27:29.901547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.615 [2024-11-19 11:27:29.901566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.615 [2024-11-19 11:27:29.901752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.615 [2024-11-19 11:27:29.901939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.615 [2024-11-19 11:27:29.901958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.615 [2024-11-19 11:27:29.901971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.615 [2024-11-19 11:27:29.901984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.615 [2024-11-19 11:27:29.914185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.615 [2024-11-19 11:27:29.914624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.615 [2024-11-19 11:27:29.914665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.615 [2024-11-19 11:27:29.914680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.615 [2024-11-19 11:27:29.914878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.615 [2024-11-19 11:27:29.915065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.615 [2024-11-19 11:27:29.915084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.615 [2024-11-19 11:27:29.915096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.615 [2024-11-19 11:27:29.915109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.615 [2024-11-19 11:27:29.927349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.615 [2024-11-19 11:27:29.927745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.615 [2024-11-19 11:27:29.927770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.615 [2024-11-19 11:27:29.927784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.615 [2024-11-19 11:27:29.927968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.615 [2024-11-19 11:27:29.928155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.615 [2024-11-19 11:27:29.928173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.615 [2024-11-19 11:27:29.928185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.615 [2024-11-19 11:27:29.928198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.615 [2024-11-19 11:27:29.940597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.615 [2024-11-19 11:27:29.940998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.615 [2024-11-19 11:27:29.941023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.615 [2024-11-19 11:27:29.941038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.615 [2024-11-19 11:27:29.941223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.615 [2024-11-19 11:27:29.941461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.615 [2024-11-19 11:27:29.941483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.615 [2024-11-19 11:27:29.941498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.615 [2024-11-19 11:27:29.941511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.615 [2024-11-19 11:27:29.953560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.615 [2024-11-19 11:27:29.953951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.615 [2024-11-19 11:27:29.953976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.615 [2024-11-19 11:27:29.953990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.615 [2024-11-19 11:27:29.954174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.615 [2024-11-19 11:27:29.954394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.615 [2024-11-19 11:27:29.954422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.615 [2024-11-19 11:27:29.954436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.615 [2024-11-19 11:27:29.954449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.615 [2024-11-19 11:27:29.966615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.615 [2024-11-19 11:27:29.967019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.616 [2024-11-19 11:27:29.967044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.616 [2024-11-19 11:27:29.967058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.616 [2024-11-19 11:27:29.967242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.616 [2024-11-19 11:27:29.967467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.616 [2024-11-19 11:27:29.967488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.616 [2024-11-19 11:27:29.967502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.616 [2024-11-19 11:27:29.967515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.616 [2024-11-19 11:27:29.979647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.616 [2024-11-19 11:27:29.980045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.616 [2024-11-19 11:27:29.980069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.616 [2024-11-19 11:27:29.980083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.616 [2024-11-19 11:27:29.980267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.616 [2024-11-19 11:27:29.980505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.616 [2024-11-19 11:27:29.980528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.616 [2024-11-19 11:27:29.980546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.616 [2024-11-19 11:27:29.980560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.616 [2024-11-19 11:27:29.992852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.616 [2024-11-19 11:27:29.993209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.616 [2024-11-19 11:27:29.993235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.616 [2024-11-19 11:27:29.993250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.616 [2024-11-19 11:27:29.993493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.616 [2024-11-19 11:27:29.993708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.616 [2024-11-19 11:27:29.993727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.616 [2024-11-19 11:27:29.993754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.616 [2024-11-19 11:27:29.993766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.616 [2024-11-19 11:27:30.006540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.616 [2024-11-19 11:27:30.006911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.616 [2024-11-19 11:27:30.006940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.616 [2024-11-19 11:27:30.006957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.616 [2024-11-19 11:27:30.007159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.616 [2024-11-19 11:27:30.007395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.616 [2024-11-19 11:27:30.007418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.616 [2024-11-19 11:27:30.007434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.616 [2024-11-19 11:27:30.007448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.616 [2024-11-19 11:27:30.019970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.616 [2024-11-19 11:27:30.020372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.616 [2024-11-19 11:27:30.020402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.616 [2024-11-19 11:27:30.020420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.616 [2024-11-19 11:27:30.020636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.616 [2024-11-19 11:27:30.020858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.616 [2024-11-19 11:27:30.020880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.616 [2024-11-19 11:27:30.020895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.616 [2024-11-19 11:27:30.020909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.616 [2024-11-19 11:27:30.033451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.616 [2024-11-19 11:27:30.033876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.616 [2024-11-19 11:27:30.033917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.616 [2024-11-19 11:27:30.033932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.616 [2024-11-19 11:27:30.034121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.616 [2024-11-19 11:27:30.034315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.616 [2024-11-19 11:27:30.034335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.616 [2024-11-19 11:27:30.034349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.616 [2024-11-19 11:27:30.034387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.616 4405.40 IOPS, 17.21 MiB/s [2024-11-19T10:27:30.113Z] [2024-11-19 11:27:30.046827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.616 [2024-11-19 11:27:30.047199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.616 [2024-11-19 11:27:30.047226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.616 [2024-11-19 11:27:30.047242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.616 [2024-11-19 11:27:30.047463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.616 [2024-11-19 11:27:30.047697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.616 [2024-11-19 11:27:30.047718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.616 [2024-11-19 11:27:30.047732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.616 [2024-11-19 11:27:30.047745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.616 [2024-11-19 11:27:30.060134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.616 [2024-11-19 11:27:30.060547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.616 [2024-11-19 11:27:30.060574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.616 [2024-11-19 11:27:30.060591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.616 [2024-11-19 11:27:30.060799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.616 [2024-11-19 11:27:30.060992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.616 [2024-11-19 11:27:30.061022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.616 [2024-11-19 11:27:30.061036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.616 [2024-11-19 11:27:30.061049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.616 [2024-11-19 11:27:30.073453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.616 [2024-11-19 11:27:30.073899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.616 [2024-11-19 11:27:30.073931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.616 [2024-11-19 11:27:30.073946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.616 [2024-11-19 11:27:30.074136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.616 [2024-11-19 11:27:30.074324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.616 [2024-11-19 11:27:30.074358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.616 [2024-11-19 11:27:30.074384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.616 [2024-11-19 11:27:30.074424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.616 [2024-11-19 11:27:30.086983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.616 [2024-11-19 11:27:30.087356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.616 [2024-11-19 11:27:30.087408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.616 [2024-11-19 11:27:30.087425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.616 [2024-11-19 11:27:30.087621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.616 [2024-11-19 11:27:30.087861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.617 [2024-11-19 11:27:30.087882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.617 [2024-11-19 11:27:30.087896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.617 [2024-11-19 11:27:30.087909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.617 [2024-11-19 11:27:30.100291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.617 [2024-11-19 11:27:30.100714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.617 [2024-11-19 11:27:30.100740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.617 [2024-11-19 11:27:30.100769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.617 [2024-11-19 11:27:30.100973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.617 [2024-11-19 11:27:30.101179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.617 [2024-11-19 11:27:30.101198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.617 [2024-11-19 11:27:30.101211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.617 [2024-11-19 11:27:30.101224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.876 [2024-11-19 11:27:30.113881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.876 [2024-11-19 11:27:30.114329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.876 [2024-11-19 11:27:30.114354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.876 [2024-11-19 11:27:30.114395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.876 [2024-11-19 11:27:30.114617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.876 [2024-11-19 11:27:30.114874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.876 [2024-11-19 11:27:30.114896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.876 [2024-11-19 11:27:30.114910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.876 [2024-11-19 11:27:30.114924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.876 [2024-11-19 11:27:30.127077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.876 [2024-11-19 11:27:30.127528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.876 [2024-11-19 11:27:30.127556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.876 [2024-11-19 11:27:30.127571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.876 [2024-11-19 11:27:30.127770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.876 [2024-11-19 11:27:30.127957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.876 [2024-11-19 11:27:30.127977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.876 [2024-11-19 11:27:30.127991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.876 [2024-11-19 11:27:30.128003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.876 [2024-11-19 11:27:30.140526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.876 [2024-11-19 11:27:30.140975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.876 [2024-11-19 11:27:30.141001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.876 [2024-11-19 11:27:30.141016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.876 [2024-11-19 11:27:30.141201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.876 [2024-11-19 11:27:30.141455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.876 [2024-11-19 11:27:30.141478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.876 [2024-11-19 11:27:30.141492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.876 [2024-11-19 11:27:30.141505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.876 [2024-11-19 11:27:30.153915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.876 [2024-11-19 11:27:30.154339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.876 [2024-11-19 11:27:30.154388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.876 [2024-11-19 11:27:30.154403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.876 [2024-11-19 11:27:30.154598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.876 [2024-11-19 11:27:30.154805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.876 [2024-11-19 11:27:30.154825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.876 [2024-11-19 11:27:30.154843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.876 [2024-11-19 11:27:30.154857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.876 [2024-11-19 11:27:30.167242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.876 [2024-11-19 11:27:30.167652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.876 [2024-11-19 11:27:30.167680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.876 [2024-11-19 11:27:30.167696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.876 [2024-11-19 11:27:30.167900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.876 [2024-11-19 11:27:30.168088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.876 [2024-11-19 11:27:30.168107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.876 [2024-11-19 11:27:30.168120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.876 [2024-11-19 11:27:30.168133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.876 [2024-11-19 11:27:30.180640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.876 [2024-11-19 11:27:30.181061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.876 [2024-11-19 11:27:30.181100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.876 [2024-11-19 11:27:30.181114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.876 [2024-11-19 11:27:30.181323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.876 [2024-11-19 11:27:30.181585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.876 [2024-11-19 11:27:30.181608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.877 [2024-11-19 11:27:30.181622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.877 [2024-11-19 11:27:30.181635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.877 [2024-11-19 11:27:30.193953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.877 [2024-11-19 11:27:30.194359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.877 [2024-11-19 11:27:30.194393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.877 [2024-11-19 11:27:30.194407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.877 [2024-11-19 11:27:30.194597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.877 [2024-11-19 11:27:30.194811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.877 [2024-11-19 11:27:30.194832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.877 [2024-11-19 11:27:30.194845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.877 [2024-11-19 11:27:30.194858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.877 [2024-11-19 11:27:30.207319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.877 [2024-11-19 11:27:30.207698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.877 [2024-11-19 11:27:30.207725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.877 [2024-11-19 11:27:30.207740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.877 [2024-11-19 11:27:30.207941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.877 [2024-11-19 11:27:30.208129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.877 [2024-11-19 11:27:30.208149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.877 [2024-11-19 11:27:30.208161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.877 [2024-11-19 11:27:30.208173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.877 [2024-11-19 11:27:30.220537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.877 [2024-11-19 11:27:30.220971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.877 [2024-11-19 11:27:30.220996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.877 [2024-11-19 11:27:30.221011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.877 [2024-11-19 11:27:30.221209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.877 [2024-11-19 11:27:30.221446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.877 [2024-11-19 11:27:30.221468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.877 [2024-11-19 11:27:30.221482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.877 [2024-11-19 11:27:30.221496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.877 [2024-11-19 11:27:30.233840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.877 [2024-11-19 11:27:30.234299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.877 [2024-11-19 11:27:30.234324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.877 [2024-11-19 11:27:30.234339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.877 [2024-11-19 11:27:30.234574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.877 [2024-11-19 11:27:30.234797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.877 [2024-11-19 11:27:30.234817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.877 [2024-11-19 11:27:30.234830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.877 [2024-11-19 11:27:30.234843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.877 [2024-11-19 11:27:30.247238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.877 [2024-11-19 11:27:30.247672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.877 [2024-11-19 11:27:30.247701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.877 [2024-11-19 11:27:30.247716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.877 [2024-11-19 11:27:30.247915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.877 [2024-11-19 11:27:30.248143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.877 [2024-11-19 11:27:30.248165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.877 [2024-11-19 11:27:30.248179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.877 [2024-11-19 11:27:30.248192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.877 [2024-11-19 11:27:30.260699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.877 [2024-11-19 11:27:30.261118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.877 [2024-11-19 11:27:30.261145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.877 [2024-11-19 11:27:30.261161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.877 [2024-11-19 11:27:30.261397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.877 [2024-11-19 11:27:30.261597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.877 [2024-11-19 11:27:30.261619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.877 [2024-11-19 11:27:30.261632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.877 [2024-11-19 11:27:30.261645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.877 [2024-11-19 11:27:30.273929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.877 [2024-11-19 11:27:30.274390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.877 [2024-11-19 11:27:30.274437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.877 [2024-11-19 11:27:30.274452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.877 [2024-11-19 11:27:30.274641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.877 [2024-11-19 11:27:30.274845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.877 [2024-11-19 11:27:30.274866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.877 [2024-11-19 11:27:30.274879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.877 [2024-11-19 11:27:30.274892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.877 [2024-11-19 11:27:30.287310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.877 [2024-11-19 11:27:30.287747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.877 [2024-11-19 11:27:30.287772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.877 [2024-11-19 11:27:30.287786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.877 [2024-11-19 11:27:30.287974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.877 [2024-11-19 11:27:30.288177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.877 [2024-11-19 11:27:30.288212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.877 [2024-11-19 11:27:30.288226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.877 [2024-11-19 11:27:30.288239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.877 [2024-11-19 11:27:30.300670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.877 [2024-11-19 11:27:30.301001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.877 [2024-11-19 11:27:30.301027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.877 [2024-11-19 11:27:30.301042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.877 [2024-11-19 11:27:30.301242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.877 [2024-11-19 11:27:30.301486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.877 [2024-11-19 11:27:30.301508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.877 [2024-11-19 11:27:30.301521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.877 [2024-11-19 11:27:30.301534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.877 [2024-11-19 11:27:30.314158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.878 [2024-11-19 11:27:30.314527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.878 [2024-11-19 11:27:30.314554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.878 [2024-11-19 11:27:30.314570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.878 [2024-11-19 11:27:30.314788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.878 [2024-11-19 11:27:30.314976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.878 [2024-11-19 11:27:30.314995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.878 [2024-11-19 11:27:30.315007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.878 [2024-11-19 11:27:30.315019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.878 [2024-11-19 11:27:30.327569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.878 [2024-11-19 11:27:30.327899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.878 [2024-11-19 11:27:30.327951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.878 [2024-11-19 11:27:30.327966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.878 [2024-11-19 11:27:30.328151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.878 [2024-11-19 11:27:30.328402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.878 [2024-11-19 11:27:30.328439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.878 [2024-11-19 11:27:30.328459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.878 [2024-11-19 11:27:30.328473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.878 [2024-11-19 11:27:30.341382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.878 [2024-11-19 11:27:30.341791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.878 [2024-11-19 11:27:30.341838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.878 [2024-11-19 11:27:30.341853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.878 [2024-11-19 11:27:30.342062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.878 [2024-11-19 11:27:30.342261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.878 [2024-11-19 11:27:30.342283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.878 [2024-11-19 11:27:30.342296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.878 [2024-11-19 11:27:30.342324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.878 [2024-11-19 11:27:30.355041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.878 [2024-11-19 11:27:30.355490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.878 [2024-11-19 11:27:30.355539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:34.878 [2024-11-19 11:27:30.355555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:34.878 [2024-11-19 11:27:30.355792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:34.878 [2024-11-19 11:27:30.356023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.878 [2024-11-19 11:27:30.356045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.878 [2024-11-19 11:27:30.356058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.878 [2024-11-19 11:27:30.356071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.878 [2024-11-19 11:27:30.368573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.878 [2024-11-19 11:27:30.368944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.878 [2024-11-19 11:27:30.368992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:34.878 [2024-11-19 11:27:30.369006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:34.878 [2024-11-19 11:27:30.369205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:34.878 [2024-11-19 11:27:30.369468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.878 [2024-11-19 11:27:30.369491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.878 [2024-11-19 11:27:30.369506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.878 [2024-11-19 11:27:30.369519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.137 [2024-11-19 11:27:30.381857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.138 [2024-11-19 11:27:30.382163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.138 [2024-11-19 11:27:30.382189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.138 [2024-11-19 11:27:30.382203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.138 [2024-11-19 11:27:30.382417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.138 [2024-11-19 11:27:30.382623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.138 [2024-11-19 11:27:30.382662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.138 [2024-11-19 11:27:30.382675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.138 [2024-11-19 11:27:30.382688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.138 [2024-11-19 11:27:30.395200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.138 [2024-11-19 11:27:30.395530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.138 [2024-11-19 11:27:30.395556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.138 [2024-11-19 11:27:30.395572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.138 [2024-11-19 11:27:30.395799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.138 [2024-11-19 11:27:30.396011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.138 [2024-11-19 11:27:30.396031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.138 [2024-11-19 11:27:30.396043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.138 [2024-11-19 11:27:30.396055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.138 [2024-11-19 11:27:30.408685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.138 [2024-11-19 11:27:30.409045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.138 [2024-11-19 11:27:30.409072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.138 [2024-11-19 11:27:30.409088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.138 [2024-11-19 11:27:30.409284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.138 [2024-11-19 11:27:30.409550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.138 [2024-11-19 11:27:30.409573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.138 [2024-11-19 11:27:30.409588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.138 [2024-11-19 11:27:30.409601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.138 [2024-11-19 11:27:30.422035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.138 [2024-11-19 11:27:30.422336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.138 [2024-11-19 11:27:30.422394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.138 [2024-11-19 11:27:30.422412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.138 [2024-11-19 11:27:30.422626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.138 [2024-11-19 11:27:30.422870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.138 [2024-11-19 11:27:30.422891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.138 [2024-11-19 11:27:30.422903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.138 [2024-11-19 11:27:30.422916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.138 [2024-11-19 11:27:30.435573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.138 [2024-11-19 11:27:30.435978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.138 [2024-11-19 11:27:30.436004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.138 [2024-11-19 11:27:30.436019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.138 [2024-11-19 11:27:30.436230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.138 [2024-11-19 11:27:30.436471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.138 [2024-11-19 11:27:30.436494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.138 [2024-11-19 11:27:30.436509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.138 [2024-11-19 11:27:30.436523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.138 [2024-11-19 11:27:30.448903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.138 [2024-11-19 11:27:30.449210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.138 [2024-11-19 11:27:30.449235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.138 [2024-11-19 11:27:30.449249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.138 [2024-11-19 11:27:30.449475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.138 [2024-11-19 11:27:30.449707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.138 [2024-11-19 11:27:30.449742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.138 [2024-11-19 11:27:30.449754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.138 [2024-11-19 11:27:30.449766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.138 [2024-11-19 11:27:30.462181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.138 [2024-11-19 11:27:30.462511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.138 [2024-11-19 11:27:30.462540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.138 [2024-11-19 11:27:30.462555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.138 [2024-11-19 11:27:30.462765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.138 [2024-11-19 11:27:30.462957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.138 [2024-11-19 11:27:30.462976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.138 [2024-11-19 11:27:30.462989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.138 [2024-11-19 11:27:30.463000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.138 [2024-11-19 11:27:30.475553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.138 [2024-11-19 11:27:30.475922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.138 [2024-11-19 11:27:30.475946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.138 [2024-11-19 11:27:30.475961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.138 [2024-11-19 11:27:30.476144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.138 [2024-11-19 11:27:30.476332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.138 [2024-11-19 11:27:30.476377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.138 [2024-11-19 11:27:30.476394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.138 [2024-11-19 11:27:30.476422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.138 [2024-11-19 11:27:30.488595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.138 [2024-11-19 11:27:30.488920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.138 [2024-11-19 11:27:30.488945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.138 [2024-11-19 11:27:30.488959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.138 [2024-11-19 11:27:30.489143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.138 [2024-11-19 11:27:30.489330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.138 [2024-11-19 11:27:30.489349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.138 [2024-11-19 11:27:30.489371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.138 [2024-11-19 11:27:30.489401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.138 [2024-11-19 11:27:30.501800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.138 [2024-11-19 11:27:30.502076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.138 [2024-11-19 11:27:30.502101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.139 [2024-11-19 11:27:30.502115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.139 [2024-11-19 11:27:30.502299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.139 [2024-11-19 11:27:30.502537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.139 [2024-11-19 11:27:30.502557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.139 [2024-11-19 11:27:30.502576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.139 [2024-11-19 11:27:30.502588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.139 [2024-11-19 11:27:30.514802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.139 [2024-11-19 11:27:30.515103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.139 [2024-11-19 11:27:30.515128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.139 [2024-11-19 11:27:30.515142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.139 [2024-11-19 11:27:30.515325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.139 [2024-11-19 11:27:30.515559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.139 [2024-11-19 11:27:30.515580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.139 [2024-11-19 11:27:30.515593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.139 [2024-11-19 11:27:30.515605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.139 [2024-11-19 11:27:30.527827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.139 [2024-11-19 11:27:30.528157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.139 [2024-11-19 11:27:30.528181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.139 [2024-11-19 11:27:30.528195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.139 [2024-11-19 11:27:30.528404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.139 [2024-11-19 11:27:30.528598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.139 [2024-11-19 11:27:30.528618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.139 [2024-11-19 11:27:30.528630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.139 [2024-11-19 11:27:30.528642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.139 [2024-11-19 11:27:30.540854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.139 [2024-11-19 11:27:30.541158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.139 [2024-11-19 11:27:30.541182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.139 [2024-11-19 11:27:30.541196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.139 [2024-11-19 11:27:30.541405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.139 [2024-11-19 11:27:30.541599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.139 [2024-11-19 11:27:30.541619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.139 [2024-11-19 11:27:30.541632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.139 [2024-11-19 11:27:30.541643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2723119 Killed "${NVMF_APP[@]}" "$@" 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2724075 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2724075 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2724075 ']' 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.139 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.139 [2024-11-19 11:27:30.554325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.139 [2024-11-19 11:27:30.554694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.139 [2024-11-19 11:27:30.554720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.139 [2024-11-19 11:27:30.554734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.139 [2024-11-19 11:27:30.554924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.139 [2024-11-19 11:27:30.555117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.139 [2024-11-19 11:27:30.555136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.139 [2024-11-19 11:27:30.555148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.139 [2024-11-19 11:27:30.555161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.139 [2024-11-19 11:27:30.567696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.139 [2024-11-19 11:27:30.568044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.139 [2024-11-19 11:27:30.568069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.139 [2024-11-19 11:27:30.568083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.139 [2024-11-19 11:27:30.568273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.139 [2024-11-19 11:27:30.568519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.139 [2024-11-19 11:27:30.568541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.139 [2024-11-19 11:27:30.568555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.139 [2024-11-19 11:27:30.568576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.139 [2024-11-19 11:27:30.581019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.139 [2024-11-19 11:27:30.581441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.139 [2024-11-19 11:27:30.581468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.139 [2024-11-19 11:27:30.581484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.139 [2024-11-19 11:27:30.581717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.139 [2024-11-19 11:27:30.581943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.139 [2024-11-19 11:27:30.581962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.139 [2024-11-19 11:27:30.581976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.139 [2024-11-19 11:27:30.582001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:35.139 [2024-11-19 11:27:30.594264] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:25:35.139 [2024-11-19 11:27:30.594319] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.139 [2024-11-19 11:27:30.594431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.139 [2024-11-19 11:27:30.594835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.139 [2024-11-19 11:27:30.594860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.139 [2024-11-19 11:27:30.594875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.139 [2024-11-19 11:27:30.595064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.139 [2024-11-19 11:27:30.595258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.139 [2024-11-19 11:27:30.595277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.139 [2024-11-19 11:27:30.595290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.140 [2024-11-19 11:27:30.595302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.140 [2024-11-19 11:27:30.607782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.140 [2024-11-19 11:27:30.608135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.140 [2024-11-19 11:27:30.608161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.140 [2024-11-19 11:27:30.608175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.140 [2024-11-19 11:27:30.608392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.140 [2024-11-19 11:27:30.608600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.140 [2024-11-19 11:27:30.608620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.140 [2024-11-19 11:27:30.608653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.140 [2024-11-19 11:27:30.608671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.140 [2024-11-19 11:27:30.621048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.140 [2024-11-19 11:27:30.621385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.140 [2024-11-19 11:27:30.621412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.140 [2024-11-19 11:27:30.621428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.140 [2024-11-19 11:27:30.621629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.140 [2024-11-19 11:27:30.621855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.140 [2024-11-19 11:27:30.621874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.140 [2024-11-19 11:27:30.621887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.140 [2024-11-19 11:27:30.621899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.400 [2024-11-19 11:27:30.634671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.400 [2024-11-19 11:27:30.635037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-11-19 11:27:30.635062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.400 [2024-11-19 11:27:30.635076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.400 [2024-11-19 11:27:30.635266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.400 [2024-11-19 11:27:30.635504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.400 [2024-11-19 11:27:30.635526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.400 [2024-11-19 11:27:30.635539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.400 [2024-11-19 11:27:30.635551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.400 [2024-11-19 11:27:30.647979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.400 [2024-11-19 11:27:30.648313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-11-19 11:27:30.648340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.400 [2024-11-19 11:27:30.648380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.400 [2024-11-19 11:27:30.648602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.400 [2024-11-19 11:27:30.648824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.400 [2024-11-19 11:27:30.648860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.400 [2024-11-19 11:27:30.648874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.400 [2024-11-19 11:27:30.648887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.400 [2024-11-19 11:27:30.661252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.400 [2024-11-19 11:27:30.661769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-11-19 11:27:30.661795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.400 [2024-11-19 11:27:30.661811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.400 [2024-11-19 11:27:30.662009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.400 [2024-11-19 11:27:30.662202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.400 [2024-11-19 11:27:30.662221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.400 [2024-11-19 11:27:30.662234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.401 [2024-11-19 11:27:30.662245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.401 [2024-11-19 11:27:30.674525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.401 [2024-11-19 11:27:30.674914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-11-19 11:27:30.674950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.401 [2024-11-19 11:27:30.674964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.401 [2024-11-19 11:27:30.675153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.401 [2024-11-19 11:27:30.675376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.401 [2024-11-19 11:27:30.675397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.401 [2024-11-19 11:27:30.675421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.401 [2024-11-19 11:27:30.675435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.401 [2024-11-19 11:27:30.677326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:35.401 [2024-11-19 11:27:30.687754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.401 [2024-11-19 11:27:30.688287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-11-19 11:27:30.688333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.401 [2024-11-19 11:27:30.688373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.401 [2024-11-19 11:27:30.688601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.401 [2024-11-19 11:27:30.688833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.401 [2024-11-19 11:27:30.688853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.401 [2024-11-19 11:27:30.688869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.401 [2024-11-19 11:27:30.688883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.401 [2024-11-19 11:27:30.701150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.401 [2024-11-19 11:27:30.701606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-11-19 11:27:30.701637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.401 [2024-11-19 11:27:30.701664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.401 [2024-11-19 11:27:30.701885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.401 [2024-11-19 11:27:30.702080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.401 [2024-11-19 11:27:30.702101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.401 [2024-11-19 11:27:30.702115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.401 [2024-11-19 11:27:30.702127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.401 [2024-11-19 11:27:30.714360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.401 [2024-11-19 11:27:30.714825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-11-19 11:27:30.714850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.401 [2024-11-19 11:27:30.714866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.401 [2024-11-19 11:27:30.715055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.401 [2024-11-19 11:27:30.715249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.401 [2024-11-19 11:27:30.715270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.401 [2024-11-19 11:27:30.715285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.401 [2024-11-19 11:27:30.715298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.401 [2024-11-19 11:27:30.727564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.401 [2024-11-19 11:27:30.728068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-11-19 11:27:30.728093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.401 [2024-11-19 11:27:30.728120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.401 [2024-11-19 11:27:30.728311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.401 [2024-11-19 11:27:30.728567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.401 [2024-11-19 11:27:30.728589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.401 [2024-11-19 11:27:30.728604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.401 [2024-11-19 11:27:30.728617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.401 [2024-11-19 11:27:30.733922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:35.401 [2024-11-19 11:27:30.733966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:35.401 [2024-11-19 11:27:30.733986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:35.401 [2024-11-19 11:27:30.733997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:35.401 [2024-11-19 11:27:30.734006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:35.401 [2024-11-19 11:27:30.735378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:35.401 [2024-11-19 11:27:30.735438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:35.401 [2024-11-19 11:27:30.735441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:35.401 [2024-11-19 11:27:30.741095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.401 [2024-11-19 11:27:30.741560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-11-19 11:27:30.741605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.401 [2024-11-19 11:27:30.741625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.401 [2024-11-19 11:27:30.741882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.401 [2024-11-19 11:27:30.742091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.401 [2024-11-19 11:27:30.742112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.401 [2024-11-19 11:27:30.742128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.401 [2024-11-19 11:27:30.742143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.401 [2024-11-19 11:27:30.754564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.401 [2024-11-19 11:27:30.755077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-11-19 11:27:30.755113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.401 [2024-11-19 11:27:30.755132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.401 [2024-11-19 11:27:30.755355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.401 [2024-11-19 11:27:30.755595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.401 [2024-11-19 11:27:30.755617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.401 [2024-11-19 11:27:30.755634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.401 [2024-11-19 11:27:30.755649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.401 [2024-11-19 11:27:30.768148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.401 [2024-11-19 11:27:30.768662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-11-19 11:27:30.768716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.401 [2024-11-19 11:27:30.768736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.401 [2024-11-19 11:27:30.768956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.401 [2024-11-19 11:27:30.769170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.401 [2024-11-19 11:27:30.769191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.401 [2024-11-19 11:27:30.769207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.401 [2024-11-19 11:27:30.769222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.401 [2024-11-19 11:27:30.781723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.402 [2024-11-19 11:27:30.782258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-11-19 11:27:30.782296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.402 [2024-11-19 11:27:30.782315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.402 [2024-11-19 11:27:30.782556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.402 [2024-11-19 11:27:30.782794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.402 [2024-11-19 11:27:30.782816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.402 [2024-11-19 11:27:30.782833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.402 [2024-11-19 11:27:30.782847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.402 [2024-11-19 11:27:30.795322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.402 [2024-11-19 11:27:30.795803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-11-19 11:27:30.795848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.402 [2024-11-19 11:27:30.795867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.402 [2024-11-19 11:27:30.796086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.402 [2024-11-19 11:27:30.796302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.402 [2024-11-19 11:27:30.796324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.402 [2024-11-19 11:27:30.796356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.402 [2024-11-19 11:27:30.796382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.402 [2024-11-19 11:27:30.808920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.402 [2024-11-19 11:27:30.809389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-11-19 11:27:30.809439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.402 [2024-11-19 11:27:30.809460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.402 [2024-11-19 11:27:30.809694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.402 [2024-11-19 11:27:30.809905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.402 [2024-11-19 11:27:30.809926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.402 [2024-11-19 11:27:30.809941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.402 [2024-11-19 11:27:30.809957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.402 [2024-11-19 11:27:30.822438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.402 [2024-11-19 11:27:30.822900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-11-19 11:27:30.822928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.402 [2024-11-19 11:27:30.822952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.402 [2024-11-19 11:27:30.823161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.402 [2024-11-19 11:27:30.823390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.402 [2024-11-19 11:27:30.823412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.402 [2024-11-19 11:27:30.823426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.402 [2024-11-19 11:27:30.823439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.402 [2024-11-19 11:27:30.836023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.402 [2024-11-19 11:27:30.836438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-11-19 11:27:30.836467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.402 [2024-11-19 11:27:30.836483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.402 [2024-11-19 11:27:30.836697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.402 [2024-11-19 11:27:30.836910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.402 [2024-11-19 11:27:30.836931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.402 [2024-11-19 11:27:30.836945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.402 [2024-11-19 11:27:30.836959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.402 [2024-11-19 11:27:30.849698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.402 [2024-11-19 11:27:30.850046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-11-19 11:27:30.850076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.402 [2024-11-19 11:27:30.850093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.402 [2024-11-19 11:27:30.850307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.402 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:35.402 [2024-11-19 11:27:30.850546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.402 [2024-11-19 11:27:30.850569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.402 [2024-11-19 11:27:30.850583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.402 [2024-11-19 11:27:30.850596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.402 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:25:35.402 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:35.402 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:35.402 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:35.402 [2024-11-19 11:27:30.863228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.402 [2024-11-19 11:27:30.863587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-11-19 11:27:30.863621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.402 [2024-11-19 11:27:30.863639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.402 [2024-11-19 11:27:30.863877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.402 [2024-11-19 11:27:30.864084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.402 [2024-11-19 11:27:30.864106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.402 [2024-11-19 11:27:30.864120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.402 [2024-11-19 11:27:30.864133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.402 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:35.402 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:35.402 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.402 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:35.402 [2024-11-19 11:27:30.873124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:35.402 [2024-11-19 11:27:30.876805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.402 [2024-11-19 11:27:30.877205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-11-19 11:27:30.877240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.403 [2024-11-19 11:27:30.877256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.403 [2024-11-19 11:27:30.877506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.403 [2024-11-19 11:27:30.877740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.403 [2024-11-19 11:27:30.877761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.403 [2024-11-19 11:27:30.877790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.403 [2024-11-19 11:27:30.877803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.403 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.403 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:35.403 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.403 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:35.403 [2024-11-19 11:27:30.890492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.403 [2024-11-19 11:27:30.890972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.403 [2024-11-19 11:27:30.891018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.403 [2024-11-19 11:27:30.891038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.403 [2024-11-19 11:27:30.891295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.403 [2024-11-19 11:27:30.891542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.403 [2024-11-19 11:27:30.891574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.403 [2024-11-19 11:27:30.891592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.403 [2024-11-19 11:27:30.891607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.662 [2024-11-19 11:27:30.904286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.662 [2024-11-19 11:27:30.904710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.662 [2024-11-19 11:27:30.904761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.662 [2024-11-19 11:27:30.904777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.662 [2024-11-19 11:27:30.904988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.662 [2024-11-19 11:27:30.905193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.662 [2024-11-19 11:27:30.905213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.662 [2024-11-19 11:27:30.905227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.662 [2024-11-19 11:27:30.905240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.662 Malloc0
00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:35.662 [2024-11-19 11:27:30.917937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:35.662 [2024-11-19 11:27:30.918372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.662 [2024-11-19 11:27:30.918402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420
00:25:35.662 [2024-11-19 11:27:30.918443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set
00:25:35.662 [2024-11-19 11:27:30.918680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor
00:25:35.662 [2024-11-19 11:27:30.918902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:35.662 [2024-11-19 11:27:30.918923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:35.662 [2024-11-19 11:27:30.918938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:35.662 [2024-11-19 11:27:30.918962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.662 [2024-11-19 11:27:30.931550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.662 [2024-11-19 11:27:30.931964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.662 [2024-11-19 11:27:30.931990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ffa40 with addr=10.0.0.2, port=4420 00:25:35.662 [2024-11-19 11:27:30.932015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ffa40 is same with the state(6) to be set 00:25:35.662 [2024-11-19 11:27:30.932217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ffa40 (9): Bad file descriptor 00:25:35.662 [2024-11-19 11:27:30.932466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.662 [2024-11-19 11:27:30.932488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:25:35.662 [2024-11-19 11:27:30.932503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.662 [2024-11-19 11:27:30.932517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:35.662 [2024-11-19 11:27:30.933544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.662 11:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2723291 00:25:35.662 [2024-11-19 11:27:30.945197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.662 3671.17 IOPS, 14.34 MiB/s [2024-11-19T10:27:31.159Z] [2024-11-19 11:27:31.094907] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:25:37.969 4322.71 IOPS, 16.89 MiB/s [2024-11-19T10:27:34.397Z] 4891.38 IOPS, 19.11 MiB/s [2024-11-19T10:27:35.331Z] 5323.44 IOPS, 20.79 MiB/s [2024-11-19T10:27:36.264Z] 5661.90 IOPS, 22.12 MiB/s [2024-11-19T10:27:37.198Z] 5935.27 IOPS, 23.18 MiB/s [2024-11-19T10:27:38.131Z] 6174.08 IOPS, 24.12 MiB/s [2024-11-19T10:27:39.065Z] 6383.62 IOPS, 24.94 MiB/s [2024-11-19T10:27:40.439Z] 6554.93 IOPS, 25.61 MiB/s [2024-11-19T10:27:40.439Z] 6705.33 IOPS, 26.19 MiB/s 00:25:44.942 Latency(us) 00:25:44.942 [2024-11-19T10:27:40.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.942 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:44.942 Verification LBA range: start 0x0 length 0x4000 00:25:44.942 Nvme1n1 : 15.01 6706.04 26.20 10400.96 0.00 7459.08 658.39 23204.60 00:25:44.943 [2024-11-19T10:27:40.440Z] =================================================================================================================== 00:25:44.943 [2024-11-19T10:27:40.440Z] Total : 6706.04 26.20 10400.96 0.00 7459.08 658.39 23204.60 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.943 rmmod nvme_tcp 00:25:44.943 rmmod nvme_fabrics 00:25:44.943 rmmod nvme_keyring 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2724075 ']' 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2724075 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2724075 ']' 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2724075 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2724075 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2724075' 00:25:44.943 killing process with pid 2724075 00:25:44.943 
11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2724075 00:25:44.943 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2724075 00:25:45.202 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:45.202 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:45.202 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:45.202 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:25:45.202 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:25:45.202 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:45.202 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:25:45.202 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:45.202 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:45.202 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.202 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.202 11:27:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:47.740 00:25:47.740 real 0m23.188s 00:25:47.740 user 1m0.339s 00:25:47.740 sys 0m4.964s 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:47.740 ************************************ 00:25:47.740 END TEST nvmf_bdevperf 00:25:47.740 
************************************ 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.740 ************************************ 00:25:47.740 START TEST nvmf_target_disconnect 00:25:47.740 ************************************ 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:47.740 * Looking for test storage... 00:25:47.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.740 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:47.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.741 --rc genhtml_branch_coverage=1 00:25:47.741 --rc genhtml_function_coverage=1 00:25:47.741 --rc genhtml_legend=1 00:25:47.741 --rc geninfo_all_blocks=1 00:25:47.741 --rc geninfo_unexecuted_blocks=1 
00:25:47.741 00:25:47.741 ' 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:47.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.741 --rc genhtml_branch_coverage=1 00:25:47.741 --rc genhtml_function_coverage=1 00:25:47.741 --rc genhtml_legend=1 00:25:47.741 --rc geninfo_all_blocks=1 00:25:47.741 --rc geninfo_unexecuted_blocks=1 00:25:47.741 00:25:47.741 ' 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:47.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.741 --rc genhtml_branch_coverage=1 00:25:47.741 --rc genhtml_function_coverage=1 00:25:47.741 --rc genhtml_legend=1 00:25:47.741 --rc geninfo_all_blocks=1 00:25:47.741 --rc geninfo_unexecuted_blocks=1 00:25:47.741 00:25:47.741 ' 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:47.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.741 --rc genhtml_branch_coverage=1 00:25:47.741 --rc genhtml_function_coverage=1 00:25:47.741 --rc genhtml_legend=1 00:25:47.741 --rc geninfo_all_blocks=1 00:25:47.741 --rc geninfo_unexecuted_blocks=1 00:25:47.741 00:25:47.741 ' 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.741 11:27:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.741 11:27:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:25:50.285 
11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:25:50.285 Found 0000:82:00.0 (0x8086 - 0x159b) 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.285 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:25:50.286 Found 0000:82:00.1 (0x8086 - 0x159b) 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:25:50.286 Found net devices under 0000:82:00.0: cvl_0_0 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:25:50.286 Found net devices under 0000:82:00.1: cvl_0_1 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.286 11:27:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:50.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:25:50.286 00:25:50.286 --- 10.0.0.2 ping statistics --- 00:25:50.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.286 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:25:50.286 00:25:50.286 --- 10.0.0.1 ping statistics --- 00:25:50.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.286 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:50.286 11:27:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:50.286 ************************************ 00:25:50.286 START TEST nvmf_target_disconnect_tc1 00:25:50.286 ************************************ 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:50.286 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:50.545 [2024-11-19 11:27:45.806575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.545 [2024-11-19 11:27:45.806632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e8f40 with 
addr=10.0.0.2, port=4420 00:25:50.545 [2024-11-19 11:27:45.806669] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:50.545 [2024-11-19 11:27:45.806695] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:50.545 [2024-11-19 11:27:45.806712] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:25:50.545 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:50.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:50.545 Initializing NVMe Controllers 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:50.545 00:25:50.545 real 0m0.111s 00:25:50.545 user 0m0.048s 00:25:50.545 sys 0m0.063s 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:50.545 ************************************ 00:25:50.545 END TEST nvmf_target_disconnect_tc1 00:25:50.545 ************************************ 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:50.545 11:27:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:50.545 ************************************ 00:25:50.545 START TEST nvmf_target_disconnect_tc2 00:25:50.545 ************************************ 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2727523 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2727523 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2727523 ']' 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.545 11:27:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.545 [2024-11-19 11:27:45.924849] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:25:50.545 [2024-11-19 11:27:45.924955] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.545 [2024-11-19 11:27:46.007025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:50.804 [2024-11-19 11:27:46.067502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.804 [2024-11-19 11:27:46.067551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.804 [2024-11-19 11:27:46.067576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.804 [2024-11-19 11:27:46.067587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.804 [2024-11-19 11:27:46.067598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:50.804 [2024-11-19 11:27:46.069111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:50.804 [2024-11-19 11:27:46.069219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:50.804 [2024-11-19 11:27:46.069306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:50.804 [2024-11-19 11:27:46.069309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.804 Malloc0 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.804 11:27:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.804 [2024-11-19 11:27:46.258689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.804 11:27:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.804 [2024-11-19 11:27:46.286960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2727576 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:50.804 11:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:53.360 11:27:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2727523 00:25:53.360 11:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 
00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Write completed with error (sct=0, sc=8) 00:25:53.360 starting I/O failed 00:25:53.360 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 [2024-11-19 11:27:48.313696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 
starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O 
failed 00:25:53.361 [2024-11-19 11:27:48.314030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 
00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 [2024-11-19 11:27:48.314414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error 
(sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Read completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 Write completed with error (sct=0, sc=8) 00:25:53.361 starting I/O failed 00:25:53.361 [2024-11-19 11:27:48.314737] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:53.361 [2024-11-19 11:27:48.314930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.361 [2024-11-19 11:27:48.314960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.361 qpair failed and we were unable to recover it. 00:25:53.361 [2024-11-19 11:27:48.315139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.362 [2024-11-19 11:27:48.315190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.362 qpair failed and we were unable to recover it. 00:25:53.362 [2024-11-19 11:27:48.315336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.362 [2024-11-19 11:27:48.315387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.362 qpair failed and we were unable to recover it. 00:25:53.362 [2024-11-19 11:27:48.315507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.362 [2024-11-19 11:27:48.315533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.362 qpair failed and we were unable to recover it. 00:25:53.362 [2024-11-19 11:27:48.315698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.362 [2024-11-19 11:27:48.315743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.362 qpair failed and we were unable to recover it. 
00:25:53.362 [2024-11-19 11:27:48.315904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.315928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.316078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.316116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.316281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.316305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.316474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.316501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.316677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.316722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.316876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.316899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.317034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.317058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.317220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.317244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.317386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.317413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.317509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.317536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.317691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.317728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.317828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.317867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.318044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.318068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.318236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.318260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.318398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.318432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.318526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.318552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.318770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.318794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.318993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.319041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.319166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.319191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.319385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.319424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.319524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.319550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.319743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.319766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.319894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.319917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.320030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.320055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.320225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.320250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.320411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.320438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.320526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.320561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.320743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.320767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.320964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.320988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.321124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.321148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.321396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.321423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.321521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.362 [2024-11-19 11:27:48.321547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.362 qpair failed and we were unable to recover it.
00:25:53.362 [2024-11-19 11:27:48.321693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.321731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.321872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.321895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.322043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.322068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.322208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.322232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.322376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.322402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.322524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.322550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.322696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.322734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.322873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.322896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.323054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.323078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.323270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.323294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.323422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.323447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.323551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.323577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.323714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.323739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.323879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.323963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.324112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.324137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.324272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.324296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.324503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.324553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.324730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.324783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.324970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.324995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.325118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.325143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.325281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.325307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.325444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.325477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.325589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.325625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.325795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.325819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.325959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.325985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.326126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.326165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.326304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.326328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.326456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.326483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.326581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.326607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.326714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.326753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.326890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.326914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.327126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.327150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.327263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.327287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.327430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.327459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.327578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.327604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.327751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.327789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.363 qpair failed and we were unable to recover it.
00:25:53.363 [2024-11-19 11:27:48.327951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.363 [2024-11-19 11:27:48.327989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.328124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.328162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.328261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.328285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.328410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.328436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.328561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.328586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.328699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.328731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.328894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.328933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.329039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.329063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.329236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.329260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.329432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.329458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.329559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.329584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.329775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.329799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.329947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.329971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.330148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.330172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.330325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.330377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.330465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.330491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.330625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.330664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.330785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.330824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.330999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.331022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.331188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.331212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.331399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.331448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.331549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.331575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.331702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.331742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.331864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.331902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.332004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.332028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.332147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.332175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.332360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.364 [2024-11-19 11:27:48.332391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.364 qpair failed and we were unable to recover it.
00:25:53.364 [2024-11-19 11:27:48.332525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.364 [2024-11-19 11:27:48.332550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.364 qpair failed and we were unable to recover it. 00:25:53.364 [2024-11-19 11:27:48.332682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.364 [2024-11-19 11:27:48.332724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.364 qpair failed and we were unable to recover it. 00:25:53.364 [2024-11-19 11:27:48.332834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.364 [2024-11-19 11:27:48.332872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.364 qpair failed and we were unable to recover it. 00:25:53.364 [2024-11-19 11:27:48.333002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.364 [2024-11-19 11:27:48.333026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.364 qpair failed and we were unable to recover it. 00:25:53.364 [2024-11-19 11:27:48.333160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.364 [2024-11-19 11:27:48.333184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.364 qpair failed and we were unable to recover it. 
00:25:53.364 [2024-11-19 11:27:48.333332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.364 [2024-11-19 11:27:48.333356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.364 qpair failed and we were unable to recover it. 00:25:53.364 [2024-11-19 11:27:48.333464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.364 [2024-11-19 11:27:48.333489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.364 qpair failed and we were unable to recover it. 00:25:53.364 [2024-11-19 11:27:48.333586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.364 [2024-11-19 11:27:48.333611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.364 qpair failed and we were unable to recover it. 00:25:53.364 [2024-11-19 11:27:48.333712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.364 [2024-11-19 11:27:48.333737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.364 qpair failed and we were unable to recover it. 00:25:53.364 [2024-11-19 11:27:48.333881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.364 [2024-11-19 11:27:48.333905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 
00:25:53.365 [2024-11-19 11:27:48.334020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.334045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.334226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.334251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.334386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.334411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.334494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.334519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.334671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.334695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 
00:25:53.365 [2024-11-19 11:27:48.334798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.334822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.334976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.335000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.335130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.335154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.335278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.335303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.335453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.335478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 
00:25:53.365 [2024-11-19 11:27:48.335606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.335632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.335763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.335787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.335940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.335964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.336134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.336158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.336282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.336307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 
00:25:53.365 [2024-11-19 11:27:48.336445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.336482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.336598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.336624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.336747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.336773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.336916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.336943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.337084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.337124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 
00:25:53.365 [2024-11-19 11:27:48.337243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.337268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.337404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.337430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.337536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.337560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.337676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.337701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.337845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.337883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 
00:25:53.365 [2024-11-19 11:27:48.338013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.338037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.338156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.338181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.338335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.338360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.338484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.338508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 00:25:53.365 [2024-11-19 11:27:48.338650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.365 [2024-11-19 11:27:48.338675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.365 qpair failed and we were unable to recover it. 
00:25:53.366 [2024-11-19 11:27:48.338848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.338871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.339027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.339051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.339184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.339223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.339340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.339388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.339508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.339534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 
00:25:53.366 [2024-11-19 11:27:48.339625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.339668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.339797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.339821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.339954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.339978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.340120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.340144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.340321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.340344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 
00:25:53.366 [2024-11-19 11:27:48.340497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.340522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.340638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.340678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.340865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.340889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.341064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.341087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.341252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.341276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 
00:25:53.366 [2024-11-19 11:27:48.341432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.341456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.341556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.341581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.341681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.341705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.341840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.341864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.341993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.342017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 
00:25:53.366 [2024-11-19 11:27:48.342185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.342210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.342331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.342355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.342529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.342554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.342670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.342694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.342860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.342883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 
00:25:53.366 [2024-11-19 11:27:48.343004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.343032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.343158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.343182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.343315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.343338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.343450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.343476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.343599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.343625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 
00:25:53.366 [2024-11-19 11:27:48.343739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.343763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.343927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.343951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.344120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.344143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.344259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.344283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.366 [2024-11-19 11:27:48.344451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.344476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 
00:25:53.366 [2024-11-19 11:27:48.344592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.366 [2024-11-19 11:27:48.344617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.366 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.344751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.344789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.344880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.344902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.345039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.345063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.345232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.345269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 
00:25:53.367 [2024-11-19 11:27:48.345390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.345415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.345572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.345597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.345751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.345775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.345911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.345950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.346105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.346129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 
00:25:53.367 [2024-11-19 11:27:48.346244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.346268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.346406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.346433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.346577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.346602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.346764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.346788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.346902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.346927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 
00:25:53.367 [2024-11-19 11:27:48.347067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.347092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.347218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.347243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.347344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.347392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.347555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.347581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 00:25:53.367 [2024-11-19 11:27:48.347711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.367 [2024-11-19 11:27:48.347735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.367 qpair failed and we were unable to recover it. 
00:25:53.367 [2024-11-19 11:27:48.347827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.347851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.347999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.348025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.348147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.348171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.348340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.348372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.348507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.348532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.348678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.348729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.348861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.348887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.349046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.349085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.349245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.349269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.349416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.349459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.349613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.349662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.349794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.349818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.349976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.350000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.350138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.350182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.350305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.350329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.350442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.350466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.367 [2024-11-19 11:27:48.350589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.367 [2024-11-19 11:27:48.350614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.367 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.350734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.350773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.350912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.350935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.351089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.351113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.351242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.351281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.351400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.351425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.351510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.351535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.351666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.351690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.351800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.351824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.351952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.351976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.352106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.352131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.352268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.352319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.352468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.352495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.352615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.352640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.352770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.352794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.352955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.352979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.353106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.353130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.353245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.353270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.353407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.353434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.353550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.353576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.353749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.353774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.353883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.353911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.354047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.354072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.354213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.354237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.354411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.354458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.354615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.354642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.354749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.354786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.354932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.354991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.355155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.355191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.355332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.355384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.355536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.355562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.355661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.355686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.355823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.355847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.355954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.355978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.356112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.356158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.356336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.356395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.356528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.368 [2024-11-19 11:27:48.356557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.368 qpair failed and we were unable to recover it.
00:25:53.368 [2024-11-19 11:27:48.356679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.356720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.356806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.356832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.356973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.356998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.357131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.357158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.357278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.357304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.357460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.357487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.357611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.357654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.357818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.357866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.358015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.358069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.358224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.358249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.358388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.358427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.358592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.358630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.358763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.358804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.358917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.358942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.359057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.359081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.359218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.359243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.359332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.359357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.359519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.359544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.359672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.359696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.359813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.359851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.359967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.359990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.360116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.360141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.360263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.360289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.360450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.360488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.360590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.360622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.360746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.360772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.360902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.360927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.361087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.361113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.361233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.361271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.361474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.361500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.361664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.361702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.361861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.361935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.362130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.362168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.362342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.362393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.362536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.362562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.362691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.369 [2024-11-19 11:27:48.362730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.369 qpair failed and we were unable to recover it.
00:25:53.369 [2024-11-19 11:27:48.362878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.370 [2024-11-19 11:27:48.362935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.370 qpair failed and we were unable to recover it.
00:25:53.370 [2024-11-19 11:27:48.363119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.370 [2024-11-19 11:27:48.363158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.370 qpair failed and we were unable to recover it.
00:25:53.370 [2024-11-19 11:27:48.363324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.370 [2024-11-19 11:27:48.363349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.370 qpair failed and we were unable to recover it.
00:25:53.370 [2024-11-19 11:27:48.363475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.370 [2024-11-19 11:27:48.363501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.370 qpair failed and we were unable to recover it.
00:25:53.370 [2024-11-19 11:27:48.363597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.370 [2024-11-19 11:27:48.363623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.370 qpair failed and we were unable to recover it.
00:25:53.370 [2024-11-19 11:27:48.363771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.370 [2024-11-19 11:27:48.363810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.370 qpair failed and we were unable to recover it.
00:25:53.370 [2024-11-19 11:27:48.363923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.363948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.364122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.364146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.364318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.364343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.364497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.364523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.364681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.364707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 
00:25:53.370 [2024-11-19 11:27:48.364864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.364912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.365072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.365110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.365309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.365347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.365529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.365569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.365759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.365789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 
00:25:53.370 [2024-11-19 11:27:48.365924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.365971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.366084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.366137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.366280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.366315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.366465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.366491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.366655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.366680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 
00:25:53.370 [2024-11-19 11:27:48.366811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.366836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.366983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.367008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.367157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.367182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.367317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.367342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.367471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.367496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 
00:25:53.370 [2024-11-19 11:27:48.367595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.367620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.367800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.367854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.368025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.368072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.368218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.368254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 00:25:53.370 [2024-11-19 11:27:48.368420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.370 [2024-11-19 11:27:48.368445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.370 qpair failed and we were unable to recover it. 
00:25:53.370 [2024-11-19 11:27:48.368590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.368614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.368778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.368841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.369024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.369089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.369269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.369308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.369460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.369487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 
00:25:53.371 [2024-11-19 11:27:48.369614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.369658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.369767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.369791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.369925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.369950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.370141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.370180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.370394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.370426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 
00:25:53.371 [2024-11-19 11:27:48.370541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.370566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.370792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.370850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.371043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.371102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.371283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.371309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.371444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.371470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 
00:25:53.371 [2024-11-19 11:27:48.371590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.371616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.371761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.371799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.371899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.371924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.372128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.372166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.372313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.372337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 
00:25:53.371 [2024-11-19 11:27:48.372494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.372520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.372726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.372752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.372931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.372969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.373152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.373190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.373433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.373463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 
00:25:53.371 [2024-11-19 11:27:48.373610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.373660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.373823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.373871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.374080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.374118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.374305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.374343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.374503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.374529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 
00:25:53.371 [2024-11-19 11:27:48.374673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.374698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.374891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.374916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.375096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.375134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.375347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.375411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.375534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.375560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 
00:25:53.371 [2024-11-19 11:27:48.375746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.371 [2024-11-19 11:27:48.375807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.371 qpair failed and we were unable to recover it. 00:25:53.371 [2024-11-19 11:27:48.376039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.376099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.376307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.376346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.376493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.376518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.376674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.376699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 
00:25:53.372 [2024-11-19 11:27:48.376804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.376829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.376964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.376988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.377143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.377182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.377339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.377387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.377533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.377559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 
00:25:53.372 [2024-11-19 11:27:48.377670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.377697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.377846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.377885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.378093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.378131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.378265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.378321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.378472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.378498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 
00:25:53.372 [2024-11-19 11:27:48.378642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.378688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.378869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.378913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.379127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.379166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.379347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.379396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.379537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.379562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 
00:25:53.372 [2024-11-19 11:27:48.379704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.379743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.379915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.379953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.380148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.380186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.380384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.380426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 00:25:53.372 [2024-11-19 11:27:48.380578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.372 [2024-11-19 11:27:48.380602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.372 qpair failed and we were unable to recover it. 
00:25:53.372 [2024-11-19 11:27:48.380807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.372 [2024-11-19 11:27:48.380861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.372 qpair failed and we were unable to recover it.
00:25:53.375 [2024-11-19 11:27:48.408126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.375 [2024-11-19 11:27:48.408184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.375 qpair failed and we were unable to recover it. 00:25:53.375 [2024-11-19 11:27:48.408381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.408421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.408581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.408640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.408841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.408898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.409053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.409109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 
00:25:53.376 [2024-11-19 11:27:48.409334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.409383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.409599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.409658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.409899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.409958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.410177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.410233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.410449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.410514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 
00:25:53.376 [2024-11-19 11:27:48.410720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.410778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.411015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.411075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.411257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.411300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.411490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.411547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.411730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.411790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 
00:25:53.376 [2024-11-19 11:27:48.411939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.411979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.412109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.412146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.412333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.412381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.412525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.412575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.412743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.412789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 
00:25:53.376 [2024-11-19 11:27:48.412967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.413005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.413193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.413230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.413383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.413434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.413600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.413669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.413841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.413900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 
00:25:53.376 [2024-11-19 11:27:48.414022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.414060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.414236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.414283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.414459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.414520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.414692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.414751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.414998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.415055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 
00:25:53.376 [2024-11-19 11:27:48.415233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.415271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.415440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.415505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.415665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.415724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.415916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.415975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.416180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.416218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 
00:25:53.376 [2024-11-19 11:27:48.416467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.416527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.416750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.376 [2024-11-19 11:27:48.416809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.376 qpair failed and we were unable to recover it. 00:25:53.376 [2024-11-19 11:27:48.417054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.417112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.417262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.417313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.417510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.417549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 
00:25:53.377 [2024-11-19 11:27:48.417790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.417848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.418042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.418081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.418280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.418319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.418535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.418595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.418752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.418819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 
00:25:53.377 [2024-11-19 11:27:48.419004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.419043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.419227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.419266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.419451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.419515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.419769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.419827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.420057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.420115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 
00:25:53.377 [2024-11-19 11:27:48.420347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.420396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.420562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.420619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.420800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.420858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.421037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.421094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.421320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.421357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 
00:25:53.377 [2024-11-19 11:27:48.421512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.421576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.421812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.421871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.422074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.422132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.422357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.422415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.422567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.422633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 
00:25:53.377 [2024-11-19 11:27:48.422868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.422926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.423121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.423178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.423357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.423410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.423612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.423651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.423856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.423912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 
00:25:53.377 [2024-11-19 11:27:48.424123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.424183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.424406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.424446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.424693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.424751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.424911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.424969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.425177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.425236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 
00:25:53.377 [2024-11-19 11:27:48.425445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.425504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.425659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.425718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.425856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.377 [2024-11-19 11:27:48.425918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.377 qpair failed and we were unable to recover it. 00:25:53.377 [2024-11-19 11:27:48.426088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.378 [2024-11-19 11:27:48.426127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.378 qpair failed and we were unable to recover it. 00:25:53.378 [2024-11-19 11:27:48.426329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.378 [2024-11-19 11:27:48.426388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.378 qpair failed and we were unable to recover it. 
00:25:53.378 [2024-11-19 11:27:48.426519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.378 [2024-11-19 11:27:48.426555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.378 qpair failed and we were unable to recover it.
[... the same two-line failure (connect() errno = 111 in posix_sock_create, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7fb71c000b90, addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously from 11:27:48.426 through 11:27:48.456; identical repeats elided ...]
00:25:53.381 [2024-11-19 11:27:48.456237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.381 [2024-11-19 11:27:48.456275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.381 qpair failed and we were unable to recover it.
00:25:53.381 [2024-11-19 11:27:48.456493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.456532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.456776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.456837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.457075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.457131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.457371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.457411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.457639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.457678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 
00:25:53.381 [2024-11-19 11:27:48.457881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.457939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.458142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.458207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.458349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.458403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.458641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.458700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.458933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.458990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 
00:25:53.381 [2024-11-19 11:27:48.459240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.459299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.459562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.459620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.459844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.459909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.460111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.460170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.460335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.460385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 
00:25:53.381 [2024-11-19 11:27:48.460564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.460621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.460871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.460929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.461133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.461189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.461428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.461467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.461715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.461774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 
00:25:53.381 [2024-11-19 11:27:48.462008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.462066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.462217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.462256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.381 [2024-11-19 11:27:48.462494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.381 [2024-11-19 11:27:48.462553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.381 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.462757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.462815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.463054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.463112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 
00:25:53.382 [2024-11-19 11:27:48.463293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.463332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.463556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.463613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.463866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.463924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.464134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.464196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.464434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.464474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 
00:25:53.382 [2024-11-19 11:27:48.464704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.464762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.465042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.465107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.465377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.465416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.465634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.465690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.465904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.465962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 
00:25:53.382 [2024-11-19 11:27:48.466174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.466234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.466438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.466495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.466722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.466779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.466994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.467052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.467248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.467287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 
00:25:53.382 [2024-11-19 11:27:48.467518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.467576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.467796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.467855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.468106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.468165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.468396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.468436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.468660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.468718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 
00:25:53.382 [2024-11-19 11:27:48.468967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.469025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.469246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.469290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.469504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.469544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.469761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.469821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.470024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.470083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 
00:25:53.382 [2024-11-19 11:27:48.470273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.470311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.470559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.470618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.470788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.470844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.471071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.471127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.471384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.471435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 
00:25:53.382 [2024-11-19 11:27:48.471680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.471742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.471988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.472046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.472226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.472265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.382 [2024-11-19 11:27:48.472459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.382 [2024-11-19 11:27:48.472499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.382 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.472703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.472759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 
00:25:53.383 [2024-11-19 11:27:48.472969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.473028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.473256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.473296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.473461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.473522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.473724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.473782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.473990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.474048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 
00:25:53.383 [2024-11-19 11:27:48.474244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.474282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.474533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.474594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.474829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.474887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.475126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.475183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.475383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.475423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 
00:25:53.383 [2024-11-19 11:27:48.475627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.475693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.475907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.475965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.476200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.476257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.476426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.476467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 00:25:53.383 [2024-11-19 11:27:48.476715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.383 [2024-11-19 11:27:48.476773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.383 qpair failed and we were unable to recover it. 
00:25:53.383 [2024-11-19 11:27:48.476962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.383 [2024-11-19 11:27:48.477018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.383 qpair failed and we were unable to recover it.
00:25:53.386 [last message repeated 114 more times for tqpair=0x7fb71c000b90 between 11:27:48.477174 and 11:27:48.506414]
00:25:53.386 [2024-11-19 11:27:48.506594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.386 [2024-11-19 11:27:48.506632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.386 qpair failed and we were unable to recover it. 00:25:53.386 [2024-11-19 11:27:48.506810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.386 [2024-11-19 11:27:48.506848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.386 qpair failed and we were unable to recover it. 00:25:53.386 [2024-11-19 11:27:48.507053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.386 [2024-11-19 11:27:48.507092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.386 qpair failed and we were unable to recover it. 00:25:53.386 [2024-11-19 11:27:48.507255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.507293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.507482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.507542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 
00:25:53.387 [2024-11-19 11:27:48.507714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.507751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.507943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.508001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.508235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.508273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.508498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.508554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.508752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.508809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 
00:25:53.387 [2024-11-19 11:27:48.508998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.509056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.509200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.509247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.509473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.509532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.509708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.509765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.509961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.510020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 
00:25:53.387 [2024-11-19 11:27:48.510239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.510277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.510498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.510562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.510783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.510842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.511034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.511091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.511294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.511332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 
00:25:53.387 [2024-11-19 11:27:48.511553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.511612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.511775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.511833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.512052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.512109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.512387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.512437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.512583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.512650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 
00:25:53.387 [2024-11-19 11:27:48.512817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.512875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.513099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.513158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.513383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.513423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.513565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.513641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.513836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.513892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 
00:25:53.387 [2024-11-19 11:27:48.514124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.514183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.514329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.514379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.514584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.514642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.514844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.387 [2024-11-19 11:27:48.514903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.387 qpair failed and we were unable to recover it. 00:25:53.387 [2024-11-19 11:27:48.515108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.515165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 
00:25:53.388 [2024-11-19 11:27:48.515344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.515395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.515573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.515632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.515790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.515848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.516045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.516101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.516283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.516322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 
00:25:53.388 [2024-11-19 11:27:48.516542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.516598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.516765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.516822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.516966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.517029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.517148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.517187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.517422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.517462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 
00:25:53.388 [2024-11-19 11:27:48.517609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.517670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.517846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.517904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.518044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.518082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.518206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.518244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.518409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.518449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 
00:25:53.388 [2024-11-19 11:27:48.518612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.518651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.518793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.518831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.518977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.519015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.519191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.519229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.519387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.519426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 
00:25:53.388 [2024-11-19 11:27:48.519580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.519618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.519783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.519845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.520027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.520065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.520261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.520299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.520455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.520520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 
00:25:53.388 [2024-11-19 11:27:48.520708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.520764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.520922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.520978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.521097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.521135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.521286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.521325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.521489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.521528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 
00:25:53.388 [2024-11-19 11:27:48.521679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.521716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.521898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.521937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.522095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.522133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.522254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.522291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 00:25:53.388 [2024-11-19 11:27:48.522414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.388 [2024-11-19 11:27:48.522454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.388 qpair failed and we were unable to recover it. 
00:25:53.388 [2024-11-19 11:27:48.522621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.389 [2024-11-19 11:27:48.522660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.389 qpair failed and we were unable to recover it. 00:25:53.389 [2024-11-19 11:27:48.522801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.389 [2024-11-19 11:27:48.522839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.389 qpair failed and we were unable to recover it. 00:25:53.389 [2024-11-19 11:27:48.522960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.389 [2024-11-19 11:27:48.522998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.389 qpair failed and we were unable to recover it. 00:25:53.389 [2024-11-19 11:27:48.523147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.389 [2024-11-19 11:27:48.523185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.389 qpair failed and we were unable to recover it. 00:25:53.389 [2024-11-19 11:27:48.523332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.389 [2024-11-19 11:27:48.523381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.389 qpair failed and we were unable to recover it. 
00:25:53.389 [2024-11-19 11:27:48.523515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.389 [2024-11-19 11:27:48.523553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.389 qpair failed and we were unable to recover it.
00:25:53.391 [2024-11-19 11:27:48.537925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.391 [2024-11-19 11:27:48.538041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.391 qpair failed and we were unable to recover it.
00:25:53.392 [2024-11-19 11:27:48.546772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.392 [2024-11-19 11:27:48.546797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.392 qpair failed and we were unable to recover it. 00:25:53.392 [2024-11-19 11:27:48.546977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.392 [2024-11-19 11:27:48.547003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.392 qpair failed and we were unable to recover it. 00:25:53.392 [2024-11-19 11:27:48.547118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.392 [2024-11-19 11:27:48.547160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.392 qpair failed and we were unable to recover it. 00:25:53.392 [2024-11-19 11:27:48.547308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.392 [2024-11-19 11:27:48.547348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.392 qpair failed and we were unable to recover it. 00:25:53.392 [2024-11-19 11:27:48.547486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.392 [2024-11-19 11:27:48.547514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.392 qpair failed and we were unable to recover it. 
00:25:53.392 [2024-11-19 11:27:48.547612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.392 [2024-11-19 11:27:48.547656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.392 qpair failed and we were unable to recover it. 00:25:53.392 [2024-11-19 11:27:48.547784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.392 [2024-11-19 11:27:48.547811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.392 qpair failed and we were unable to recover it. 00:25:53.392 [2024-11-19 11:27:48.547958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.392 [2024-11-19 11:27:48.547984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.392 qpair failed and we were unable to recover it. 00:25:53.392 [2024-11-19 11:27:48.548115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.392 [2024-11-19 11:27:48.548143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.392 qpair failed and we were unable to recover it. 00:25:53.392 [2024-11-19 11:27:48.548283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.392 [2024-11-19 11:27:48.548326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.392 qpair failed and we were unable to recover it. 
00:25:53.392 [2024-11-19 11:27:48.548503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.392 [2024-11-19 11:27:48.548530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.392 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.548643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.548670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.548786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.548813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.548960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.548992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.549145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.549171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 
00:25:53.393 [2024-11-19 11:27:48.549290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.549327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.549493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.549520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.549650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.549691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.549841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.549866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.550087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.550112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 
00:25:53.393 [2024-11-19 11:27:48.550330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.550379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.550506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.550533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.550707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.550734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.550926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.550953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.551135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.551161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 
00:25:53.393 [2024-11-19 11:27:48.551303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.551344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.551532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.551559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.551681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.551723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.551891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.551917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.552099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.552136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 
00:25:53.393 [2024-11-19 11:27:48.552253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.552302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.552439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.552467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.552561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.552588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.552708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.552734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.552832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.552859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 
00:25:53.393 [2024-11-19 11:27:48.553031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.553082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.553233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.553259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.553396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.553424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.553603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.553630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.553818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.553859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 
00:25:53.393 [2024-11-19 11:27:48.554026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.554053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.554159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.554187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.554345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.554380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.554484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.554511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.554672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.554699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 
00:25:53.393 [2024-11-19 11:27:48.554827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.393 [2024-11-19 11:27:48.554869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.393 qpair failed and we were unable to recover it. 00:25:53.393 [2024-11-19 11:27:48.555016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.555058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.555212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.555256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.555404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.555431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.555567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.555594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 
00:25:53.394 [2024-11-19 11:27:48.555764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.555808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.555953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.555979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.556133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.556176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.556372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.556418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.556578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.556605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 
00:25:53.394 [2024-11-19 11:27:48.556752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.556779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.556915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.556956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.557117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.557143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.557335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.557392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.557530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.557557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 
00:25:53.394 [2024-11-19 11:27:48.557739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.557766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.557918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.557944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.558059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.558085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.558305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.558341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.558522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.558557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 
00:25:53.394 [2024-11-19 11:27:48.558742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.558779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.558944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.558970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.559219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.559257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.559434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.559462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.559596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.559623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 
00:25:53.394 [2024-11-19 11:27:48.559797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.559823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.559970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.560006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.560154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.560195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.560325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.560376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.560484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.560512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 
00:25:53.394 [2024-11-19 11:27:48.560635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.560663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.560822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.560848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.560970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.560997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.561086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.561114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.561243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.561271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 
00:25:53.394 [2024-11-19 11:27:48.561376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.561404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.394 [2024-11-19 11:27:48.561545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.394 [2024-11-19 11:27:48.561572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.394 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.561711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.561748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.561948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.561975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.562136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.562162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 
00:25:53.395 [2024-11-19 11:27:48.562283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.562323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.562495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.562523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.562681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.562723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.562869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.562895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.563038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.563065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 
00:25:53.395 [2024-11-19 11:27:48.563168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.563194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.563346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.563394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.563593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.563620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.563761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.563796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.564044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.564070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 
00:25:53.395 [2024-11-19 11:27:48.564209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.564235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.564421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.564449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.564607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.564635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.564811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.564851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.565012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.565040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 
00:25:53.395 [2024-11-19 11:27:48.565185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.565218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.565353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.565390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.565569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.565596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.565730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.565756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.565895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.565922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 
00:25:53.395 [2024-11-19 11:27:48.566126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.566162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.566356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.566391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.566516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.566543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.566693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.566728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.566856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.566895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 
00:25:53.395 [2024-11-19 11:27:48.567038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.567092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.567235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.567261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.567416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.395 [2024-11-19 11:27:48.567452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.395 qpair failed and we were unable to recover it. 00:25:53.395 [2024-11-19 11:27:48.567659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.567720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.567914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.567969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 
00:25:53.396 [2024-11-19 11:27:48.568192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.568243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.568469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.568517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.568676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.568712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.568867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.568916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.569105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.569131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 
00:25:53.396 [2024-11-19 11:27:48.569251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.569298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.569460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.569489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.569586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.569624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.569760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.569786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.569958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.570000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 
00:25:53.396 [2024-11-19 11:27:48.570153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.570186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.570288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.570342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.570520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.570554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.570733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.570767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.570945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.570980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 
00:25:53.396 [2024-11-19 11:27:48.571134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.571167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.571399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.571426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.571528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.571555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.571753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.571780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.571911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.571938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 
00:25:53.396 [2024-11-19 11:27:48.572043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.572096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.572262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.572306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.572453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.572479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.572648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.572676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.572829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.572854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 
00:25:53.396 [2024-11-19 11:27:48.573060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.573093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.573213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.573246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.573394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.573438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.573555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.573590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.573725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.573771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 
00:25:53.396 [2024-11-19 11:27:48.573916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.573950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.574141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.574168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.574308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.574334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.574486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.574512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.574611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.574650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 
00:25:53.396 [2024-11-19 11:27:48.574795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.396 [2024-11-19 11:27:48.574822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.396 qpair failed and we were unable to recover it. 00:25:53.396 [2024-11-19 11:27:48.575004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.575045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.575193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.575217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.575376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.575413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.575509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.575535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 
00:25:53.397 [2024-11-19 11:27:48.575721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.575761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.575987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.576020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.576177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.576210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.576352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.576397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.576536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.576571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 
00:25:53.397 [2024-11-19 11:27:48.576719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.576760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.576923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.576961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.577159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.577185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.577312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.577338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.577495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.577523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 
00:25:53.397 [2024-11-19 11:27:48.577686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.577712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.577935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.577961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.578110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.578135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.578348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.578422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.578566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.578593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 
00:25:53.397 [2024-11-19 11:27:48.578755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.578791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.578940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.578975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.579133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.579174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.579379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.579425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 00:25:53.397 [2024-11-19 11:27:48.579560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.397 [2024-11-19 11:27:48.579587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.397 qpair failed and we were unable to recover it. 
00:25:53.400 [2024-11-19 11:27:48.601120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.601153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.601302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.601343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.601478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.601504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.601678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.601720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.601824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.601865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 
00:25:53.400 [2024-11-19 11:27:48.602038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.602065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.602224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.602251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.602393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.602431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.602529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.602556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.602686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.602713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 
00:25:53.400 [2024-11-19 11:27:48.602877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.602902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.603081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.603107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.603256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.603281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.603450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.603482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.603622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.603672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 
00:25:53.400 [2024-11-19 11:27:48.603813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.603844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.603999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.604032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.604172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.604216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.604331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.604356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.604463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.604489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 
00:25:53.400 [2024-11-19 11:27:48.604663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.604705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.604884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.604928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.400 [2024-11-19 11:27:48.605096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.400 [2024-11-19 11:27:48.605122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.400 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.605270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.605312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.605452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.605478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 
00:25:53.401 [2024-11-19 11:27:48.605564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.605590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.605732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.605757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.605935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.605968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.606168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.606203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.606414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.606448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 
00:25:53.401 [2024-11-19 11:27:48.606633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.606688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.606870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.606895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.607031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.607056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.607211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.607238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.607386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.607418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 
00:25:53.401 [2024-11-19 11:27:48.607570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.607601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.607718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.607743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.607933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.607960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.608137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.608172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.608385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.608423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 
00:25:53.401 [2024-11-19 11:27:48.608544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.608572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.608750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.608792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.608938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.608980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.609159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.609195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.609336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.609371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 
00:25:53.401 [2024-11-19 11:27:48.609520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.609546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.609717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.609744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.609916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.609942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.610140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.610175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.610353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.610422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 
00:25:53.401 [2024-11-19 11:27:48.610582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.610615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.610772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.610796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.610917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.610942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.611054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.611081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.611202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.611229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 
00:25:53.401 [2024-11-19 11:27:48.611353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.611390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.611539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.611567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.611671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.611696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.611844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.611870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.611989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.612014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 
00:25:53.401 [2024-11-19 11:27:48.612132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.612158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.612293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.612320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.612453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.612480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.612584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.612609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 00:25:53.401 [2024-11-19 11:27:48.612769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.401 [2024-11-19 11:27:48.612814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.401 qpair failed and we were unable to recover it. 
00:25:53.402 [2024-11-19 11:27:48.612930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.612973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.613117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.613144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.613258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.613285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.613436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.613464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.613584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.613612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 
00:25:53.402 [2024-11-19 11:27:48.613732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.613759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.613857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.613895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.614077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.614104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.614223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.614265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.614426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.614452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 
00:25:53.402 [2024-11-19 11:27:48.614541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.614570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.614708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.614740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.614899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.614934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.615119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.615160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.615255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.615298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 
00:25:53.402 [2024-11-19 11:27:48.615437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.615464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.615595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.615622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.615766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.615807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.615947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.615973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.616121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.616146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 
00:25:53.402 [2024-11-19 11:27:48.616325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.616360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.616540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.616568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.616694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.616720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.616915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.616950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.617134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.617170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 
00:25:53.402 [2024-11-19 11:27:48.617311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.617346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.617486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.617511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.617660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.617687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.617826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.617868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.617986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.618012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 
00:25:53.402 [2024-11-19 11:27:48.618205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.618232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.618369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.618404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.618529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.618554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.618709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.618751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.618897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.618935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 
00:25:53.402 [2024-11-19 11:27:48.619062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.619087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.619243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.619278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.619434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.619469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.619660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.619701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.619847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.619874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 
00:25:53.402 [2024-11-19 11:27:48.620031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.620067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.620172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.620199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.620328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.620374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.620516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.620544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.402 qpair failed and we were unable to recover it. 00:25:53.402 [2024-11-19 11:27:48.620698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.402 [2024-11-19 11:27:48.620738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 
00:25:53.403 [2024-11-19 11:27:48.620859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.620900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.621077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.621104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.621206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.621232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.621387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.621420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.621520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.621552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 
00:25:53.403 [2024-11-19 11:27:48.621711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.621744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.621901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.621926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.622054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.622081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.622241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.622269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.622386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.622414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 
00:25:53.403 [2024-11-19 11:27:48.622542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.622569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.622660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.622686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.622822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.622847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.622993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.623020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.623133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.623159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 
00:25:53.403 [2024-11-19 11:27:48.623293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.623319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.623511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.623544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.623641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.623672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.623799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.623825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.623986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.624027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 
00:25:53.403 [2024-11-19 11:27:48.624202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.624228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.624385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.624439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.624562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.624595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.624773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.624798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.624891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.624916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 
00:25:53.403 [2024-11-19 11:27:48.625006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.625029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.625146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.625171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.625300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.625352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.625534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.625560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.625678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.625713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 
00:25:53.403 [2024-11-19 11:27:48.625848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.625872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.625998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.626022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.626187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.626216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.626322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.626347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.626493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.626520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 
00:25:53.403 [2024-11-19 11:27:48.626608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.626635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.626726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.626753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.626884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.626923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.627078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.627104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 00:25:53.403 [2024-11-19 11:27:48.627201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.403 [2024-11-19 11:27:48.627226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.403 qpair failed and we were unable to recover it. 
00:25:53.404 [2024-11-19 11:27:48.627306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.627330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.627429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.627454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.627534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.627559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.627702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.627727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.627888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.627911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 
00:25:53.404 [2024-11-19 11:27:48.628047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.628072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.628188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.628213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.628351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.628385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.628519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.628544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.628655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.628680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 
00:25:53.404 [2024-11-19 11:27:48.628804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.628831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.628954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.628980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.629140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.629175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.629317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.629345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.629463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.629489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 
00:25:53.404 [2024-11-19 11:27:48.629596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.629622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.629746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.629772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.629919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.629945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.630058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.630082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.630211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.630235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 
00:25:53.404 [2024-11-19 11:27:48.630377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.630414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.630515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.630541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.630692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.630734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.630879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.630906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 00:25:53.404 [2024-11-19 11:27:48.631027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.404 [2024-11-19 11:27:48.631060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:53.404 qpair failed and we were unable to recover it. 
00:25:53.404 [2024-11-19 11:27:48.634287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.404 [2024-11-19 11:27:48.634323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:53.404 qpair failed and we were unable to recover it.
00:25:53.404 [2024-11-19 11:27:48.634517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.404 [2024-11-19 11:27:48.634566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:53.404 qpair failed and we were unable to recover it.
00:25:53.404 [2024-11-19 11:27:48.634738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.404 [2024-11-19 11:27:48.634765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:53.404 qpair failed and we were unable to recover it.
00:25:53.404 [2024-11-19 11:27:48.634907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.404 [2024-11-19 11:27:48.634932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:53.404 qpair failed and we were unable to recover it.
00:25:53.404 [2024-11-19 11:27:48.635059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.404 [2024-11-19 11:27:48.635083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:53.404 qpair failed and we were unable to recover it.
00:25:53.406 [2024-11-19 11:27:48.645501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.406 [2024-11-19 11:27:48.645526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:53.406 qpair failed and we were unable to recover it.
00:25:53.406 [2024-11-19 11:27:48.645692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.406 [2024-11-19 11:27:48.645736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.406 qpair failed and we were unable to recover it.
00:25:53.406 [2024-11-19 11:27:48.645847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.406 [2024-11-19 11:27:48.645873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.406 qpair failed and we were unable to recover it.
00:25:53.406 [2024-11-19 11:27:48.646023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.406 [2024-11-19 11:27:48.646048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.406 qpair failed and we were unable to recover it.
00:25:53.406 [2024-11-19 11:27:48.646151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.406 [2024-11-19 11:27:48.646177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.406 qpair failed and we were unable to recover it.
00:25:53.406 [2024-11-19 11:27:48.651023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.406 [2024-11-19 11:27:48.651048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.406 qpair failed and we were unable to recover it. 00:25:53.406 [2024-11-19 11:27:48.651147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.406 [2024-11-19 11:27:48.651173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.406 qpair failed and we were unable to recover it. 00:25:53.406 [2024-11-19 11:27:48.651319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.406 [2024-11-19 11:27:48.651345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.406 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.651450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.651475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.651580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.651604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 
00:25:53.407 [2024-11-19 11:27:48.651735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.651761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.651921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.651969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.652091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.652122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.652279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.652309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.652427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.652452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 
00:25:53.407 [2024-11-19 11:27:48.652547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.652571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.652709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.652734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.652874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.652912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.653048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.653073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.653166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.653192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 
00:25:53.407 [2024-11-19 11:27:48.653339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.653376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.653499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.653524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.653630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.653654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.653773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.653803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.653913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.653939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 
00:25:53.407 [2024-11-19 11:27:48.654099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.654124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.654243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.654284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.654395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.654420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.654514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.654539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.654635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.654680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 
00:25:53.407 [2024-11-19 11:27:48.654868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.654893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.655018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.655057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.655197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.655228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.655369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.655413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.655500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.655526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 
00:25:53.407 [2024-11-19 11:27:48.655653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.655677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.655848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.655871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.656053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.656078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.656224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.656250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.656390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.656416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 
00:25:53.407 [2024-11-19 11:27:48.656544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.656569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.656717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.656752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.656909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.656932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.657058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.657097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.657216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.657247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 
00:25:53.407 [2024-11-19 11:27:48.657377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.657421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.657525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.657552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.657707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.657748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.657913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.657936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.407 [2024-11-19 11:27:48.658104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.658128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 
00:25:53.407 [2024-11-19 11:27:48.658308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.407 [2024-11-19 11:27:48.658332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.407 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.658488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.658515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.658648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.658673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.658844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.658869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.659003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.659043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 
00:25:53.408 [2024-11-19 11:27:48.659184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.659226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.659358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.659391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.659514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.659539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.659657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.659682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.659845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.659869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 
00:25:53.408 [2024-11-19 11:27:48.660013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.660041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.660181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.660212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.660345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.660385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.660516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.660546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.660707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.660732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 
00:25:53.408 [2024-11-19 11:27:48.660865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.660889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.661020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.661056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.661159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.661184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.661309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.661338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.661458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.661483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 
00:25:53.408 [2024-11-19 11:27:48.661571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.661597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.661743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.661768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.661915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.661939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.662094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.662119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.662288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.662313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 
00:25:53.408 [2024-11-19 11:27:48.662434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.662460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.662577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.662603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.408 [2024-11-19 11:27:48.662811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.408 [2024-11-19 11:27:48.662836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.408 qpair failed and we were unable to recover it. 00:25:53.409 [2024-11-19 11:27:48.663016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.409 [2024-11-19 11:27:48.663040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.409 qpair failed and we were unable to recover it. 00:25:53.409 [2024-11-19 11:27:48.663193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.409 [2024-11-19 11:27:48.663232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.409 qpair failed and we were unable to recover it. 
00:25:53.409 [2024-11-19 11:27:48.663387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.409 [2024-11-19 11:27:48.663414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:53.409 qpair failed and we were unable to recover it.
00:25:53.413 (the three-line error sequence above repeats verbatim for tqpair=0x7fb720000b90, differing only in timestamps, through [2024-11-19 11:27:48.683252])
00:25:53.413 [2024-11-19 11:27:48.683375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.413 [2024-11-19 11:27:48.683404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.413 qpair failed and we were unable to recover it. 00:25:53.413 [2024-11-19 11:27:48.683541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.413 [2024-11-19 11:27:48.683569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.413 qpair failed and we were unable to recover it. 00:25:53.413 [2024-11-19 11:27:48.683695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.413 [2024-11-19 11:27:48.683728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.413 qpair failed and we were unable to recover it. 00:25:53.413 [2024-11-19 11:27:48.683863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.413 [2024-11-19 11:27:48.683892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.413 qpair failed and we were unable to recover it. 00:25:53.413 [2024-11-19 11:27:48.684065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.413 [2024-11-19 11:27:48.684094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.413 qpair failed and we were unable to recover it. 
00:25:53.413 [2024-11-19 11:27:48.684249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.413 [2024-11-19 11:27:48.684278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.413 qpair failed and we were unable to recover it. 00:25:53.413 [2024-11-19 11:27:48.684418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.413 [2024-11-19 11:27:48.684446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.413 qpair failed and we were unable to recover it. 00:25:53.413 [2024-11-19 11:27:48.684591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.413 [2024-11-19 11:27:48.684618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.413 qpair failed and we were unable to recover it. 00:25:53.413 [2024-11-19 11:27:48.684717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.413 [2024-11-19 11:27:48.684745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.413 qpair failed and we were unable to recover it. 00:25:53.413 [2024-11-19 11:27:48.684913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.413 [2024-11-19 11:27:48.684942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.413 qpair failed and we were unable to recover it. 
00:25:53.413 [2024-11-19 11:27:48.685069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.413 [2024-11-19 11:27:48.685096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.413 qpair failed and we were unable to recover it. 00:25:53.413 [2024-11-19 11:27:48.685241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.685269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.685395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.685424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.685557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.685585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.685723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.685751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 
00:25:53.414 [2024-11-19 11:27:48.685879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.685911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.686036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.686063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.686212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.686241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.686392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.686420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.686534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.686561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 
00:25:53.414 [2024-11-19 11:27:48.686746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.686774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.686908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.686935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.687104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.687131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.687283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.687320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.687459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.687486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 
00:25:53.414 [2024-11-19 11:27:48.687639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.687666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.687786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.687812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.687961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.687988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.688090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.688116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.688277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.688303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 
00:25:53.414 [2024-11-19 11:27:48.688420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.688447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.688578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.688604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.688773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.688798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.688991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.689018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 00:25:53.414 [2024-11-19 11:27:48.689201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.414 [2024-11-19 11:27:48.689227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.414 qpair failed and we were unable to recover it. 
00:25:53.414 [2024-11-19 11:27:48.689391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.689419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.689578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.689605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.689755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.689781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.689934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.689968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.690119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.690146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 
00:25:53.415 [2024-11-19 11:27:48.690270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.690295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.690433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.690467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.690627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.690653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.690802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.690828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.690942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.690967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 
00:25:53.415 [2024-11-19 11:27:48.691126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.691154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.691328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.691353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.691483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.691509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.691649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.691673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.691817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.691843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 
00:25:53.415 [2024-11-19 11:27:48.692024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.692049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.692169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.692194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.692327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.692352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.692525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.692551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.692674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.692700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 
00:25:53.415 [2024-11-19 11:27:48.692814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.692844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.693020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.693045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.693202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.693236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.693394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.693420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 00:25:53.415 [2024-11-19 11:27:48.693571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.415 [2024-11-19 11:27:48.693597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.415 qpair failed and we were unable to recover it. 
00:25:53.415 [2024-11-19 11:27:48.693696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.693720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.693845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.693870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.694024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.694049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.694139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.694165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.694310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.694335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 
00:25:53.416 [2024-11-19 11:27:48.694510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.694535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.694687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.694711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.694886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.694919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.695086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.695114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.695272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.695296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 
00:25:53.416 [2024-11-19 11:27:48.695477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.695503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.695639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.695665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.695858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.695882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.696011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.696036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.696202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.696227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 
00:25:53.416 [2024-11-19 11:27:48.696357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.696388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.696508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.696533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.696697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.696730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.696928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.696954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 00:25:53.416 [2024-11-19 11:27:48.697125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.416 [2024-11-19 11:27:48.697150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.416 qpair failed and we were unable to recover it. 
00:25:53.420 [2024-11-19 11:27:48.717308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.717333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.717514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.717540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.717650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.717675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.717815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.717856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.718010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.718035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 
00:25:53.420 [2024-11-19 11:27:48.718230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.718255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.718392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.718422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.718550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.718574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.718762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.718787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.719061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.719085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 
00:25:53.420 [2024-11-19 11:27:48.719253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.719293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.719426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.719451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.719567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.719592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.719737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.719763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.719902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.719942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 
00:25:53.420 [2024-11-19 11:27:48.720039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.720063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.720247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.720271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.720431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.720457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.720621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.720646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.720838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.720862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 
00:25:53.420 [2024-11-19 11:27:48.721008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.721034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.721168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.721194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.721286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.721312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.721429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.721455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.721553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.721579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 
00:25:53.420 [2024-11-19 11:27:48.721722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.721750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.721900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.721940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.722088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.722129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.722268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.722293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.722459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.722486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 
00:25:53.420 [2024-11-19 11:27:48.722569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.722595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.722807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.722831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.722974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.722999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.420 [2024-11-19 11:27:48.723120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.420 [2024-11-19 11:27:48.723146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.420 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.723276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.723301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 
00:25:53.421 [2024-11-19 11:27:48.723463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.723490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.723621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.723670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.723810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.723851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.723989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.724015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.724178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.724229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 
00:25:53.421 [2024-11-19 11:27:48.724374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.724413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.724537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.724567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.724710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.724736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.724908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.724932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.725074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.725099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 
00:25:53.421 [2024-11-19 11:27:48.725223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.725248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.725384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.725410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.725533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.725559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.725692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.725718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.725896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.725922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 
00:25:53.421 [2024-11-19 11:27:48.726046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.726071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.726227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.726252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.726400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.726426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.726544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.726570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.726700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.726726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 
00:25:53.421 [2024-11-19 11:27:48.726919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.726944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.727057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.727081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.727254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.727280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.727424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.727451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.727564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.727590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 
00:25:53.421 [2024-11-19 11:27:48.727709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.727737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.727918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.727947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.728116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.728141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.728311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.728335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.728459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.728485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 
00:25:53.421 [2024-11-19 11:27:48.728661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.728706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.728849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.728873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.729002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.729027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.729155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-11-19 11:27:48.729180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.421 qpair failed and we were unable to recover it. 00:25:53.421 [2024-11-19 11:27:48.729329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-11-19 11:27:48.729353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.422 qpair failed and we were unable to recover it. 
00:25:53.422 [2024-11-19 11:27:48.729510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-11-19 11:27:48.729536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.422 qpair failed and we were unable to recover it. 00:25:53.422 [2024-11-19 11:27:48.729666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-11-19 11:27:48.729692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.422 qpair failed and we were unable to recover it. 00:25:53.422 [2024-11-19 11:27:48.729879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-11-19 11:27:48.729903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.422 qpair failed and we were unable to recover it. 00:25:53.422 [2024-11-19 11:27:48.730121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-11-19 11:27:48.730147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.422 qpair failed and we were unable to recover it. 00:25:53.422 [2024-11-19 11:27:48.730314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-11-19 11:27:48.730338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.422 qpair failed and we were unable to recover it. 
00:25:53.422 [2024-11-19 11:27:48.730512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-11-19 11:27:48.730537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.422 qpair failed and we were unable to recover it. 00:25:53.422 [2024-11-19 11:27:48.730684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-11-19 11:27:48.730722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.422 qpair failed and we were unable to recover it. 00:25:53.422 [2024-11-19 11:27:48.730907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-11-19 11:27:48.730931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.422 qpair failed and we were unable to recover it. 00:25:53.422 [2024-11-19 11:27:48.731105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-11-19 11:27:48.731131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.422 qpair failed and we were unable to recover it. 00:25:53.422 [2024-11-19 11:27:48.731274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-11-19 11:27:48.731298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.422 qpair failed and we were unable to recover it. 
00:25:53.425 [2024-11-19 11:27:48.752711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.752758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.752953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.752976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.753139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.753163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.753264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.753289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.753478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.753518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 
00:25:53.425 [2024-11-19 11:27:48.753671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.753710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.753915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.753939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.754163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.754187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.754376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.754401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.754555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.754579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 
00:25:53.425 [2024-11-19 11:27:48.754717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.754740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.754910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.754950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.755153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.755177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.755346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.755390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.755532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.755555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 
00:25:53.425 [2024-11-19 11:27:48.755720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.755744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.755952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.755976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.756138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.425 [2024-11-19 11:27:48.756161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.425 qpair failed and we were unable to recover it. 00:25:53.425 [2024-11-19 11:27:48.756326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.756368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.756515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.756541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 
00:25:53.426 [2024-11-19 11:27:48.756720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.756759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.757031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.757056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.757222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.757246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.757473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.757498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.757645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.757683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 
00:25:53.426 [2024-11-19 11:27:48.757859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.757883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.758097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.758121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.758239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.758262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.758470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.758494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.758608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.758632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 
00:25:53.426 [2024-11-19 11:27:48.758749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.758773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.758995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.759035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.759270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.759294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.759464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.759488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.759677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.759700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 
00:25:53.426 [2024-11-19 11:27:48.759900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.759928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.760121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.760145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.760373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.760399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.760556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.760582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.760766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.760795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 
00:25:53.426 [2024-11-19 11:27:48.761182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.761205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.761415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.761440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.761567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.761592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.761806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.761830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.762015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.762038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 
00:25:53.426 [2024-11-19 11:27:48.762190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.762214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.762449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.762475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.762614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.762639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.762750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.762789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.762914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.762953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 
00:25:53.426 [2024-11-19 11:27:48.763123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.763147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.763349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.763395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.763538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.763563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.426 [2024-11-19 11:27:48.763776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.426 [2024-11-19 11:27:48.763811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.426 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.763913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.763939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 
00:25:53.427 [2024-11-19 11:27:48.764157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.764183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.764298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.764324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.764506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.764532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.764733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.764758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.765001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.765027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 
00:25:53.427 [2024-11-19 11:27:48.765167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.765202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.765377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.765404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.765658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.765699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.765846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.765872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.765997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.766033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 
00:25:53.427 [2024-11-19 11:27:48.766184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.766207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.766422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.766460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.766578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.766604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.766745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.766769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.766984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.767007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 
00:25:53.427 [2024-11-19 11:27:48.767209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.767233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.767406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.767433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.767585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.767610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.767759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.767782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.767972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.768003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 
00:25:53.427 [2024-11-19 11:27:48.768208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.768238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.768385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.768411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.768578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.768603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.768834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.768858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 00:25:53.427 [2024-11-19 11:27:48.769048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.427 [2024-11-19 11:27:48.769082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.427 qpair failed and we were unable to recover it. 
00:25:53.430 [2024-11-19 11:27:48.791730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.430 [2024-11-19 11:27:48.791754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.430 qpair failed and we were unable to recover it. 00:25:53.430 [2024-11-19 11:27:48.791982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.430 [2024-11-19 11:27:48.792006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.430 qpair failed and we were unable to recover it. 00:25:53.430 [2024-11-19 11:27:48.792166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.792190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.792350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.792393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.792583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.792609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 
00:25:53.431 [2024-11-19 11:27:48.792770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.792793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.792940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.792963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.793143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.793166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.793293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.793316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.793513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.793539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 
00:25:53.431 [2024-11-19 11:27:48.793773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.793797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.794008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.794031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.794196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.794219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.794398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.794423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.794629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.794652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 
00:25:53.431 [2024-11-19 11:27:48.794844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.794867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.795046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.795070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.795230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.795254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.795441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.795466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.795655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.795679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 
00:25:53.431 [2024-11-19 11:27:48.795881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.795905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.796129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.796152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.796317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.796341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.796554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.796579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.796802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.796826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 
00:25:53.431 [2024-11-19 11:27:48.797044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.797067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.797239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.797262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.797469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.797494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.797696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.797721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.797955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.797979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 
00:25:53.431 [2024-11-19 11:27:48.798149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.798172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.798357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.798387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.798595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.798625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.798751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.798775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.798953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.798977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 
00:25:53.431 [2024-11-19 11:27:48.799111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.799134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.799338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.799367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.431 qpair failed and we were unable to recover it. 00:25:53.431 [2024-11-19 11:27:48.799502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.431 [2024-11-19 11:27:48.799533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.799739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.799763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.799935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.799958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 
00:25:53.432 [2024-11-19 11:27:48.800160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.800183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.800329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.800352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.800533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.800557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.800742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.800765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.800903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.800926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 
00:25:53.432 [2024-11-19 11:27:48.801133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.801156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.801330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.801376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.801514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.801538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.801645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.801684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.801840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.801873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 
00:25:53.432 [2024-11-19 11:27:48.802101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.802124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.802263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.802285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.802428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.802453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.802602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.802640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.802796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.802820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 
00:25:53.432 [2024-11-19 11:27:48.802952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.802975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.803151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.803174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.803318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.803341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.803569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.803594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.803755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.803777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 
00:25:53.432 [2024-11-19 11:27:48.803969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.804000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.804170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.804194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.804356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.804385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.804499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.804523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.804709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.804732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 
00:25:53.432 [2024-11-19 11:27:48.804949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.804972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.805135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.805158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.805285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.805308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.805559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.805584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.805757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.805786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 
00:25:53.432 [2024-11-19 11:27:48.806008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.806031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.806204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.806227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.806358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.432 [2024-11-19 11:27:48.806410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.432 qpair failed and we were unable to recover it. 00:25:53.432 [2024-11-19 11:27:48.806572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.806605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.806818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.806841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 
00:25:53.433 [2024-11-19 11:27:48.806985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.807008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.807193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.807216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.807416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.807441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.807591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.807615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.807751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.807774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 
00:25:53.433 [2024-11-19 11:27:48.808013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.808037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.808258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.808281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.808500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.808524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.808670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.808693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.808918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.808942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 
00:25:53.433 [2024-11-19 11:27:48.809162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.809185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.809304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.809337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.809589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.809614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.809790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.809813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.810001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.810025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 
00:25:53.433 [2024-11-19 11:27:48.810188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.810212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.810401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.810426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.810659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.810698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.810918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.810942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.811117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.811141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 
00:25:53.433 [2024-11-19 11:27:48.811266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.811289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.811415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.811440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.811587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.811612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.811778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.811802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.811982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.812004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 
00:25:53.433 [2024-11-19 11:27:48.812176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.812200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.812353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.812383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.812517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.812541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.812772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.812795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 00:25:53.433 [2024-11-19 11:27:48.812904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.433 [2024-11-19 11:27:48.812928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.433 qpair failed and we were unable to recover it. 
00:25:53.434 [2024-11-19 11:27:48.813165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.813188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.813297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.813321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.813503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.813531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.813745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.813769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.813898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.813922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 
00:25:53.434 [2024-11-19 11:27:48.814076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.814110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.814293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.814325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.814497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.814526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.814695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.814719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.814831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.814854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 
00:25:53.434 [2024-11-19 11:27:48.814995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.815033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.815194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.815218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.815413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.815438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.815627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.815652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.815826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.815850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 
00:25:53.434 [2024-11-19 11:27:48.816029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.816054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.816222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.816247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.816394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.816419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.816544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.816568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.816695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.816720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 
00:25:53.434 [2024-11-19 11:27:48.816859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.816884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.817021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.817045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.817165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.817190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.817380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.817420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.817644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.817670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 
00:25:53.434 [2024-11-19 11:27:48.817831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.817856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.817952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.817990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.818140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.818165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.818307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.818332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.818515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.818554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 
00:25:53.434 [2024-11-19 11:27:48.818667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.818692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.818809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.818833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.818972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.818997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.819165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.819190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 00:25:53.434 [2024-11-19 11:27:48.819393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.819417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.434 qpair failed and we were unable to recover it. 
00:25:53.434 [2024-11-19 11:27:48.819633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.434 [2024-11-19 11:27:48.819658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.819904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.819929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.820129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.820153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.820312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.820336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.820508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.820534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 
00:25:53.435 [2024-11-19 11:27:48.820704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.820728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.820896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.820919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.821124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.821147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.821317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.821341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.821525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.821551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 
00:25:53.435 [2024-11-19 11:27:48.821771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.821795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.822001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.822039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.822220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.822248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.822456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.822481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.822672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.822697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 
00:25:53.435 [2024-11-19 11:27:48.822866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.822889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.823085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.823108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.823281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.823305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.823457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.823481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.823689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.823713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 
00:25:53.435 [2024-11-19 11:27:48.823867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.823891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.824070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.824095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.824216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.824241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.824317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.824342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.824496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.824521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 
00:25:53.435 [2024-11-19 11:27:48.824660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.824700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.824844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.824892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.825110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.825137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.825373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.825400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.825605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.825631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 
00:25:53.435 [2024-11-19 11:27:48.825804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.825830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.826000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.826026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.826199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.826224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.826419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.826445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.826615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.826641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 
00:25:53.435 [2024-11-19 11:27:48.826874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.435 [2024-11-19 11:27:48.826898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.435 qpair failed and we were unable to recover it. 00:25:53.435 [2024-11-19 11:27:48.827128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.436 [2024-11-19 11:27:48.827152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.436 qpair failed and we were unable to recover it. 00:25:53.436 [2024-11-19 11:27:48.827320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.436 [2024-11-19 11:27:48.827346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.436 qpair failed and we were unable to recover it. 00:25:53.436 [2024-11-19 11:27:48.827534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.436 [2024-11-19 11:27:48.827560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.436 qpair failed and we were unable to recover it. 00:25:53.436 [2024-11-19 11:27:48.827702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.436 [2024-11-19 11:27:48.827748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.436 qpair failed and we were unable to recover it. 
00:25:53.436 [2024-11-19 11:27:48.827971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.436 [2024-11-19 11:27:48.827996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.436 qpair failed and we were unable to recover it. 00:25:53.436 [2024-11-19 11:27:48.828178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.436 [2024-11-19 11:27:48.828203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.436 qpair failed and we were unable to recover it. 00:25:53.436 [2024-11-19 11:27:48.828352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.436 [2024-11-19 11:27:48.828383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.436 qpair failed and we were unable to recover it. 00:25:53.436 [2024-11-19 11:27:48.828611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.436 [2024-11-19 11:27:48.828636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.436 qpair failed and we were unable to recover it. 00:25:53.436 [2024-11-19 11:27:48.828829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.436 [2024-11-19 11:27:48.828853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.436 qpair failed and we were unable to recover it. 
00:25:53.436 [2024-11-19 11:27:48.829105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.436 [2024-11-19 11:27:48.829131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.436 qpair failed and we were unable to recover it. 00:25:53.436 [2024-11-19 11:27:48.829308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.436 [2024-11-19 11:27:48.829333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.436 qpair failed and we were unable to recover it. 00:25:53.436 [2024-11-19 11:27:48.829518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.436 [2024-11-19 11:27:48.829543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.436 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.829714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.829740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.829949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.829974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 
00:25:53.719 [2024-11-19 11:27:48.830149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.830175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.830298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.830323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.830538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.830563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.830788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.830813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.830998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.831024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 
00:25:53.719 [2024-11-19 11:27:48.831196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.831221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.831394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.831420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.831597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.831622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.831780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.831805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.831983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.832009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 
00:25:53.719 [2024-11-19 11:27:48.832188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.832214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.832421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.832447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.832562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.832587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.832755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.832781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.832958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.832983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 
00:25:53.719 [2024-11-19 11:27:48.833201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.833227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.833396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.833427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.833555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.833580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.833795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.833820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.833982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.834008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 
00:25:53.719 [2024-11-19 11:27:48.834180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.834205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.834325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.834350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.834544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.834570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.834779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.834804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.834950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.834975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 
00:25:53.719 [2024-11-19 11:27:48.835200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.835225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.835415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.835441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.835665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.835689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.835916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.835942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.836124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.836149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 
00:25:53.719 [2024-11-19 11:27:48.836386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.719 [2024-11-19 11:27:48.836413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.719 qpair failed and we were unable to recover it. 00:25:53.719 [2024-11-19 11:27:48.836594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.836619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.836780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.836805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.836930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.836970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.837140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.837166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 
00:25:53.720 [2024-11-19 11:27:48.837390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.837432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.837581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.837606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.837757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.837797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.838017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.838041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.838295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.838320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 
00:25:53.720 [2024-11-19 11:27:48.838634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.838676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.838869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.838894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.839089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.839115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.839310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.839341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.839586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.839612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 
00:25:53.720 [2024-11-19 11:27:48.839839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.839863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.840013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.840036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.840234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.840274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.840433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.840459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.840674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.840699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 
00:25:53.720 [2024-11-19 11:27:48.840879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.840904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.841103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.841127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.841384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.841410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.841596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.841622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.841797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.841821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 
00:25:53.720 [2024-11-19 11:27:48.842006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.842030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.842229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.842254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.842414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.842441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.842549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.842575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.842791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.842830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 
00:25:53.720 [2024-11-19 11:27:48.843067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.843091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.843278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.843301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.843534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.843559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.843766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.843790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.843991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.844015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 
00:25:53.720 [2024-11-19 11:27:48.844170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.844194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.844391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.844418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.720 [2024-11-19 11:27:48.844582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.720 [2024-11-19 11:27:48.844622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.720 qpair failed and we were unable to recover it. 00:25:53.721 [2024-11-19 11:27:48.844802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.721 [2024-11-19 11:27:48.844826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.721 qpair failed and we were unable to recover it. 00:25:53.721 [2024-11-19 11:27:48.845027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.721 [2024-11-19 11:27:48.845050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.721 qpair failed and we were unable to recover it. 
00:25:53.721 [2024-11-19 11:27:48.845240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.721 [2024-11-19 11:27:48.845265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.721 qpair failed and we were unable to recover it. 00:25:53.721 [2024-11-19 11:27:48.845528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.721 [2024-11-19 11:27:48.845554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.721 qpair failed and we were unable to recover it. 00:25:53.721 [2024-11-19 11:27:48.845765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.721 [2024-11-19 11:27:48.845789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.721 qpair failed and we were unable to recover it. 00:25:53.721 [2024-11-19 11:27:48.845941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.721 [2024-11-19 11:27:48.845965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.721 qpair failed and we were unable to recover it. 00:25:53.721 [2024-11-19 11:27:48.846161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.721 [2024-11-19 11:27:48.846186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.721 qpair failed and we were unable to recover it. 
00:25:53.721 [2024-11-19 11:27:48.846383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.721 [2024-11-19 11:27:48.846409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.721 qpair failed and we were unable to recover it. 00:25:53.721 [2024-11-19 11:27:48.846623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.721 [2024-11-19 11:27:48.846663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.721 qpair failed and we were unable to recover it. 00:25:53.721 [2024-11-19 11:27:48.846811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.721 [2024-11-19 11:27:48.846834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.721 qpair failed and we were unable to recover it. 00:25:53.721 [2024-11-19 11:27:48.847072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.721 [2024-11-19 11:27:48.847097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.721 qpair failed and we were unable to recover it. 00:25:53.721 [2024-11-19 11:27:48.847309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.721 [2024-11-19 11:27:48.847333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.721 qpair failed and we were unable to recover it. 
00:25:53.724 [2024-11-19 11:27:48.872232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.872257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.872490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.872517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.872727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.872751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.872964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.873002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.873192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.873216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 
00:25:53.724 [2024-11-19 11:27:48.873413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.873440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.873673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.873697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.873920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.873944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.874129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.874153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.874351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.874399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 
00:25:53.724 [2024-11-19 11:27:48.874613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.874638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.874816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.874839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.875016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.875040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.875269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.875293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.875485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.875526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 
00:25:53.724 [2024-11-19 11:27:48.875713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.875753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.875901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.875930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.724 qpair failed and we were unable to recover it. 00:25:53.724 [2024-11-19 11:27:48.876165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.724 [2024-11-19 11:27:48.876189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.876434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.876460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.876687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.876713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 
00:25:53.725 [2024-11-19 11:27:48.876937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.876961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.877181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.877205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.877437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.877463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.877595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.877620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.877817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.877841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 
00:25:53.725 [2024-11-19 11:27:48.878041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.878065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.878306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.878330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.878512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.878537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.878725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.878750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.878941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.878965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 
00:25:53.725 [2024-11-19 11:27:48.879209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.879235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.879424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.879466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.879583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.879609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.879818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.879842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.880023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.880047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 
00:25:53.725 [2024-11-19 11:27:48.880229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.880253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.880416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.880442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.880544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.880570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.880805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.880829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.881044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.881068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 
00:25:53.725 [2024-11-19 11:27:48.881299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.881323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.881564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.881589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.881763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.881787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.881961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.881989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.882187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.882212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 
00:25:53.725 [2024-11-19 11:27:48.882444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.882470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.882687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.882711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.882888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.882911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.883086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.883110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.883299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.883323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 
00:25:53.725 [2024-11-19 11:27:48.883515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.883540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.883760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.883800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.884028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.884054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.884240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.725 [2024-11-19 11:27:48.884265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.725 qpair failed and we were unable to recover it. 00:25:53.725 [2024-11-19 11:27:48.884481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.884508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 
00:25:53.726 [2024-11-19 11:27:48.884720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.884745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.884945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.884968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.885159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.885184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.885415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.885441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.885628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.885667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 
00:25:53.726 [2024-11-19 11:27:48.885884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.885910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.886133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.886157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.886354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.886383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.886508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.886533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.886765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.886790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 
00:25:53.726 [2024-11-19 11:27:48.886993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.887017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.887221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.887245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.887415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.887441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.887618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.887644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.887870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.887894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 
00:25:53.726 [2024-11-19 11:27:48.888117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.888145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.888395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.888420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.888573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.888598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.888790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.888814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 00:25:53.726 [2024-11-19 11:27:48.889032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.726 [2024-11-19 11:27:48.889057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.726 qpair failed and we were unable to recover it. 
00:25:53.726 [2024-11-19 11:27:48.889241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.889266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.889462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.889489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.889669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.889708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.889882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.889907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.890114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.890138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.890307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.890331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.890532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.890557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.890739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.890764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.890906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.890930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.891167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.891191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.891410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.891436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.891572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.891597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.891786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.891809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.891949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.891973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.892222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.892247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.726 qpair failed and we were unable to recover it.
00:25:53.726 [2024-11-19 11:27:48.892397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.726 [2024-11-19 11:27:48.892438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.892666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.892691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.892884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.892909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.893151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.893176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.893377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.893403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.893610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.893636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.893785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.893809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.894011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.894035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.894206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.894231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.894368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.894393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.894607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.894633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.894817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.894841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.895063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.895102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.895313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.895336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.895537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.895563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.895686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.895711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.895857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.895882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.896116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.896140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.896351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.896401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.896606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.896632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.896789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.896813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.897027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.897054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.897243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.897268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.897501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.897527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.897751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.897776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.897959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.897983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.898207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.898232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.898475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.898501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.898632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.898657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.898826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.898863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.899065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.899090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.899309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.899334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.899545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.899571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.899792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.899816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.900000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.900023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.900212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.900236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.900417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.727 [2024-11-19 11:27:48.900442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.727 qpair failed and we were unable to recover it.
00:25:53.727 [2024-11-19 11:27:48.900575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.900599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.900793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.900818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.900984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.901008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.901200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.901225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.901387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.901427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.901644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.901669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.901834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.901873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.902092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.902117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.902288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.902311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.902538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.902564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.902757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.902781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.902968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.902997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.903168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.903192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.903420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.903446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.903661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.903699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.903868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.903892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.904074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.904097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.904318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.904342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.904552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.904578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.904714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.904738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.904908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.904932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.905152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.905177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.905305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.905329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.905522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.905547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.905757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.905783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.905988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.906013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.906241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.906265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.906486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.906512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.906673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.906697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.906929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.906954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.907182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.907206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.728 [2024-11-19 11:27:48.907405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.728 [2024-11-19 11:27:48.907430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.728 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.907590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.907615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.907752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.907776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.907989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.908013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.908248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.908273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.908483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.908508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.908678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.908702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.908903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.908930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.909172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.909196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.909392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.909417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.909585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.909611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.909822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.909847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.910086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.910110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.910321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.910359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.910564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.910589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.910802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.910826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.910960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.910985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.911194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.911219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.911445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.911471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.911691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.911716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.911891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.911916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.912137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.912160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.912344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.912399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.912629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.912654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.912822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.912847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.913053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.913077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.913291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.913314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.913470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.913496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.913668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.913693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.913922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.913946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.914176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.914200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.914399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.729 [2024-11-19 11:27:48.914425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.729 qpair failed and we were unable to recover it.
00:25:53.729 [2024-11-19 11:27:48.914606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.729 [2024-11-19 11:27:48.914631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.729 qpair failed and we were unable to recover it. 00:25:53.729 [2024-11-19 11:27:48.914793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.729 [2024-11-19 11:27:48.914816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.729 qpair failed and we were unable to recover it. 00:25:53.729 [2024-11-19 11:27:48.914987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.729 [2024-11-19 11:27:48.915011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.729 qpair failed and we were unable to recover it. 00:25:53.729 [2024-11-19 11:27:48.915237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.729 [2024-11-19 11:27:48.915261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.729 qpair failed and we were unable to recover it. 00:25:53.729 [2024-11-19 11:27:48.915454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.729 [2024-11-19 11:27:48.915479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.729 qpair failed and we were unable to recover it. 
00:25:53.729 [2024-11-19 11:27:48.915699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.915723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.915954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.915978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.916194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.916219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.916461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.916486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.916666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.916690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 
00:25:53.730 [2024-11-19 11:27:48.916914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.916938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.917166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.917190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.917380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.917406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.917609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.917634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.917866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.917891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 
00:25:53.730 [2024-11-19 11:27:48.918111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.918136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.918310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.918335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.918496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.918522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.918741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.918781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.918993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.919017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 
00:25:53.730 [2024-11-19 11:27:48.919179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.919204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.919376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.919402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.919590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.919615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.919730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.919770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.920001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.920027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 
00:25:53.730 [2024-11-19 11:27:48.920206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.920230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.920437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.920477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.920642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.920667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.920867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.920892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.921016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.921040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 
00:25:53.730 [2024-11-19 11:27:48.921265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.921290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.921476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.921503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.921644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.921669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.921873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.921897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.922102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.922126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 
00:25:53.730 [2024-11-19 11:27:48.922350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.922395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.922547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.922572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.922761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.922784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.922999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.923024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.923208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.923231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 
00:25:53.730 [2024-11-19 11:27:48.923448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.923474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.730 [2024-11-19 11:27:48.923674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.730 [2024-11-19 11:27:48.923699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.730 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.923890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.923915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.924146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.924174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.924407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.924434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 
00:25:53.731 [2024-11-19 11:27:48.924570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.924596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.924756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.924782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.924911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.924951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.925101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.925141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.925291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.925315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 
00:25:53.731 [2024-11-19 11:27:48.925512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.925539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.925685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.925709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.925915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.925939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.926159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.926183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.926414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.926441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 
00:25:53.731 [2024-11-19 11:27:48.926646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.926686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.926866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.926890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.927013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.927038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.927225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.927249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.927463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.927490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 
00:25:53.731 [2024-11-19 11:27:48.927718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.927743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.927967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.927991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.928157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.928182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.928387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.928428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.928612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.928636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 
00:25:53.731 [2024-11-19 11:27:48.928853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.928877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.929108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.929132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.929263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.929288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.929480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.929505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.929692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.929716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 
00:25:53.731 [2024-11-19 11:27:48.929936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.929965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.930133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.930158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.930311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.930351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.930576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.930603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 00:25:53.731 [2024-11-19 11:27:48.930826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.731 [2024-11-19 11:27:48.930851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.731 qpair failed and we were unable to recover it. 
00:25:53.731 [2024-11-19 11:27:48.931085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.731 [2024-11-19 11:27:48.931110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.731 qpair failed and we were unable to recover it.
00:25:53.735 [... identical connect() failed (errno = 111) / sock connection error (tqpair=0x1045fa0, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." sequence repeated for each reconnect attempt from 11:27:48.931274 through 11:27:48.955448 ...]
00:25:53.735 [2024-11-19 11:27:48.955598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.955623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.955828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.955853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.956083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.956115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.956244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.956281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.956433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.956458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 
00:25:53.735 [2024-11-19 11:27:48.956663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.956689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.956882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.956907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.957130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.957155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.957385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.957411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.957523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.957548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 
00:25:53.735 [2024-11-19 11:27:48.957723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.957762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.957981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.958004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.958230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.958255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.958447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.958473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.958643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.958667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 
00:25:53.735 [2024-11-19 11:27:48.958887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.958911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.959106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.959130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.959317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.959341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.959523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.959547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.959683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.959706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 
00:25:53.735 [2024-11-19 11:27:48.959911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.959935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.960065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.960091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.960285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.960310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.960518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.960543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.960729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.960753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 
00:25:53.735 [2024-11-19 11:27:48.960953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.960977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.961145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.961169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.961345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.961388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.961620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.961645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 00:25:53.735 [2024-11-19 11:27:48.961746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.961785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.735 qpair failed and we were unable to recover it. 
00:25:53.735 [2024-11-19 11:27:48.961954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.735 [2024-11-19 11:27:48.961994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.962157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.962182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.962404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.962430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.962631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.962672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.962897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.962921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 
00:25:53.736 [2024-11-19 11:27:48.963111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.963134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.963389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.963415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.963644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.963670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.963865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.963890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.964022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.964062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 
00:25:53.736 [2024-11-19 11:27:48.964213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.964237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.964424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.964450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.964626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.964666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.964891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.964915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.965154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.965178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 
00:25:53.736 [2024-11-19 11:27:48.965380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.965406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.965574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.965600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.965807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.965845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.966062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.966086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.966286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.966310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 
00:25:53.736 [2024-11-19 11:27:48.966517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.966544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.966698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.966737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.966965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.966990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.967214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.967239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.967423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.967450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 
00:25:53.736 [2024-11-19 11:27:48.967634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.967674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.967907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.967931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.968153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.968178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.968315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.968339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.968514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.968539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 
00:25:53.736 [2024-11-19 11:27:48.968740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.968779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.968964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.968988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.969184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.969208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.969383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.969423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.969637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.969663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 
00:25:53.736 [2024-11-19 11:27:48.969886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.969910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.970109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.736 [2024-11-19 11:27:48.970132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.736 qpair failed and we were unable to recover it. 00:25:53.736 [2024-11-19 11:27:48.970338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.737 [2024-11-19 11:27:48.970385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.737 qpair failed and we were unable to recover it. 00:25:53.737 [2024-11-19 11:27:48.970607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.737 [2024-11-19 11:27:48.970633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.737 qpair failed and we were unable to recover it. 00:25:53.737 [2024-11-19 11:27:48.970852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.737 [2024-11-19 11:27:48.970876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.737 qpair failed and we were unable to recover it. 
00:25:53.737 [2024-11-19 11:27:48.971025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.737 [2024-11-19 11:27:48.971055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.737 qpair failed and we were unable to recover it. 00:25:53.737 [2024-11-19 11:27:48.971237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.737 [2024-11-19 11:27:48.971261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.737 qpair failed and we were unable to recover it. 00:25:53.737 [2024-11-19 11:27:48.971479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.737 [2024-11-19 11:27:48.971504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.737 qpair failed and we were unable to recover it. 00:25:53.737 [2024-11-19 11:27:48.971710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.737 [2024-11-19 11:27:48.971749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.737 qpair failed and we were unable to recover it. 00:25:53.737 [2024-11-19 11:27:48.971935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.737 [2024-11-19 11:27:48.971959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.737 qpair failed and we were unable to recover it. 
00:25:53.737 [2024-11-19 11:27:48.972136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.737 [2024-11-19 11:27:48.972174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.737 qpair failed and we were unable to recover it.
[... identical connect()/nvme_tcp_qpair_connect_sock error repeated through 11:27:48.997343 ...]
00:25:53.740 [2024-11-19 11:27:48.997514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.740 [2024-11-19 11:27:48.997539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.740 qpair failed and we were unable to recover it.
00:25:53.740 [2024-11-19 11:27:48.997729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:48.997768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:48.997945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:48.997967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:48.998134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:48.998162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:48.998413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:48.998455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:48.998655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:48.998681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 
00:25:53.740 [2024-11-19 11:27:48.998867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:48.998907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:48.999106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:48.999131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:48.999329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:48.999353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:48.999566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:48.999591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:48.999793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:48.999816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 
00:25:53.740 [2024-11-19 11:27:49.000035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:49.000058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:49.000233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:49.000258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:49.000491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:49.000518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:49.000730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:49.000755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:49.000936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:49.000962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 
00:25:53.740 [2024-11-19 11:27:49.001141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:49.001164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:49.001360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:49.001406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:49.001608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:49.001633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.740 [2024-11-19 11:27:49.001875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.740 [2024-11-19 11:27:49.001914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.740 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.002108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.002133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 
00:25:53.741 [2024-11-19 11:27:49.002292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.002316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.002526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.002552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.002793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.002817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.002975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.002999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.003217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.003241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 
00:25:53.741 [2024-11-19 11:27:49.003429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.003454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.003682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.003723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.003866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.003890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.004034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.004059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.004246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.004274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 
00:25:53.741 [2024-11-19 11:27:49.004504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.004531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.004743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.004767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.004908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.004931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.005023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.005047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.005210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.005234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 
00:25:53.741 [2024-11-19 11:27:49.005448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.005474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.005619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.005645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.005886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.005911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.006131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.006155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.006390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.006415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 
00:25:53.741 [2024-11-19 11:27:49.006572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.006597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.006782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.006806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.007027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.007051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.007243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.007268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.007459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.007485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 
00:25:53.741 [2024-11-19 11:27:49.007703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.007727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.007957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.007981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.008130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.008169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.008396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.008422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.008597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.008622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 
00:25:53.741 [2024-11-19 11:27:49.008843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.008867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.009083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.009107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.009325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.009372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.009544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.009570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 00:25:53.741 [2024-11-19 11:27:49.009755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.741 [2024-11-19 11:27:49.009780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.741 qpair failed and we were unable to recover it. 
00:25:53.741 [2024-11-19 11:27:49.009969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.009993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.010158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.010182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.010410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.010435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.010602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.010626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.010818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.010842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 
00:25:53.742 [2024-11-19 11:27:49.011064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.011089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.011254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.011278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.011509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.011536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.011752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.011776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.011981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.012006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 
00:25:53.742 [2024-11-19 11:27:49.012181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.012204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.012404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.012429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.012575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.012600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.012751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.012792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.012951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.012975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 
00:25:53.742 [2024-11-19 11:27:49.013195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.013220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.013447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.013473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.013639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.013664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.013884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.013907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.014085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.014109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 
00:25:53.742 [2024-11-19 11:27:49.014304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.014343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.014583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.014607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.014802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.014826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.015053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.015092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.015255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.015279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 
00:25:53.742 [2024-11-19 11:27:49.015501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.015526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.015678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.015702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.015933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.015957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.016201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.016225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.016373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.016399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 
00:25:53.742 [2024-11-19 11:27:49.016607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.016632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.016853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.016877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.017011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.017050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.017202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.017240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.017425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.017451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 
00:25:53.742 [2024-11-19 11:27:49.017649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.017689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.742 [2024-11-19 11:27:49.017881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.742 [2024-11-19 11:27:49.017906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.742 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.018128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.018152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.018287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.018311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.018505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.018531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 
00:25:53.743 [2024-11-19 11:27:49.018744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.018770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.018947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.018971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.019135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.019164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.019312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.019337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.019579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.019603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 
00:25:53.743 [2024-11-19 11:27:49.019810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.019834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.020016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.020040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.020171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.020196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.020398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.020424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.020612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.020653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 
00:25:53.743 [2024-11-19 11:27:49.020841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.020865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.021046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.021070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.021251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.021275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.021469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.021496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.021703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.021742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 
00:25:53.743 [2024-11-19 11:27:49.021929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.021954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.022202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.022227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.022420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.022446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.022632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.022671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.022888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.022913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 
00:25:53.743 [2024-11-19 11:27:49.023096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.023120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.023350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.023396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.023587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.023613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.023838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.023863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.024082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.024106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 
00:25:53.743 [2024-11-19 11:27:49.024291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.024316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.024483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.024507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.024735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.024760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.743 [2024-11-19 11:27:49.024980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.743 [2024-11-19 11:27:49.025005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.743 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.025197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.025226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 
00:25:53.744 [2024-11-19 11:27:49.025389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.025416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.025555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.025581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.025792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.025831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.025993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.026017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.026211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.026234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 
00:25:53.744 [2024-11-19 11:27:49.026405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.026430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.026609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.026635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.026834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.026858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.027038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.027062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.027200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.027225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 
00:25:53.744 [2024-11-19 11:27:49.027398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.027424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.027576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.027600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.027772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.027796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.028030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.028054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.028251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.028290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 
00:25:53.744 [2024-11-19 11:27:49.028522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.028547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.028721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.028745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.028985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.029010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.029235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.029260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.029486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.029512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 
00:25:53.744 [2024-11-19 11:27:49.029740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.029764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.029952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.029976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.030177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.030201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.030437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.030463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.030682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.030707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 
00:25:53.744 [2024-11-19 11:27:49.030936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.030959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.031136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.031160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.031347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.031399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.031630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.031669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.031852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.031877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 
00:25:53.744 [2024-11-19 11:27:49.032058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.032083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.032276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.032301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.032409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.032434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.032662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.032686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 00:25:53.744 [2024-11-19 11:27:49.032911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.744 [2024-11-19 11:27:49.032937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.744 qpair failed and we were unable to recover it. 
00:25:53.744 [2024-11-19 11:27:49.033123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.033147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.033314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.033338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.033578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.033603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.033784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.033810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.033962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.033987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 
00:25:53.745 [2024-11-19 11:27:49.034164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.034187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.034348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.034405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.034617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.034641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.034834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.034858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.035015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.035053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 
00:25:53.745 [2024-11-19 11:27:49.035191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.035215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.035457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.035483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.035666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.035691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.035818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.035842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.036030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.036054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 
00:25:53.745 [2024-11-19 11:27:49.036192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.036217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.036415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.036439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.036665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.036689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.036813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.036839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 00:25:53.745 [2024-11-19 11:27:49.037075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.745 [2024-11-19 11:27:49.037113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.745 qpair failed and we were unable to recover it. 
00:25:53.745 [2024-11-19 11:27:49.037263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.037288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.037471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.037497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.037724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.037749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.037928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.037952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.038185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.038209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.038351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.038402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.038577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.038602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.038799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.038824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.039003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.039028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.039199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.039224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.039416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.039442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.039615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.039641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.039824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.039853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.040025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.040048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.040212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.040237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.040458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.040485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.745 [2024-11-19 11:27:49.040710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.745 [2024-11-19 11:27:49.040734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.745 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.040954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.040978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.041125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.041150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.041293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.041332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.041495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.041521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.041687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.041712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.041894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.041917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.042056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.042081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.042212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.042238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.042406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.042432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.042667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.042691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.042906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.042930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.043115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.043140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.043370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.043409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.043613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.043654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.043857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.043881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.044042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.044066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.044240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.044263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.044491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.044517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.044667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.044692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.044904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.044928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.045089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.045113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.045340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.045384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.045587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.045620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.045782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.045806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.045980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.046003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.046185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.046224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.046418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.046445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.046657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.046681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.046856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.046880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.047054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.047079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.047279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.047304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.047535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.047561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.047738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.047778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.047943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.047968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.048170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.048195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.048391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.746 [2024-11-19 11:27:49.048417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.746 qpair failed and we were unable to recover it.
00:25:53.746 [2024-11-19 11:27:49.048635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.048674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.048883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.048906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.049100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.049124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.049325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.049369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.049542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.049567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.049767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.049792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.049971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.049996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.050224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.050249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.050440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.050466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.050646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.050671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.050905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.050929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.051161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.051185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.051358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.051403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.051586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.051617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.051773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.051798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.051974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.051998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.052184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.052222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.052412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.052438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.052632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.052657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.052892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.052916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.053102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.053125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.053332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.053357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.053561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.053588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.053760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.053785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.053955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.053980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.054170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.054194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.054393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.054419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.054575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.054614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.054779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.054802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.054998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.055023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.055165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.055191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.055310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.055350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.747 qpair failed and we were unable to recover it.
00:25:53.747 [2024-11-19 11:27:49.055505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.747 [2024-11-19 11:27:49.055530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.748 qpair failed and we were unable to recover it.
00:25:53.748 [2024-11-19 11:27:49.055705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.748 [2024-11-19 11:27:49.055729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.748 qpair failed and we were unable to recover it.
00:25:53.748 [2024-11-19 11:27:49.055965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.748 [2024-11-19 11:27:49.055990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.748 qpair failed and we were unable to recover it.
00:25:53.748 [2024-11-19 11:27:49.056213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.748 [2024-11-19 11:27:49.056238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.748 qpair failed and we were unable to recover it.
00:25:53.748 [2024-11-19 11:27:49.056426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.748 [2024-11-19 11:27:49.056452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.748 qpair failed and we were unable to recover it.
00:25:53.748 [2024-11-19 11:27:49.056680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.748 [2024-11-19 11:27:49.056720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.748 qpair failed and we were unable to recover it.
00:25:53.748 [2024-11-19 11:27:49.056934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.748 [2024-11-19 11:27:49.056958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.748 qpair failed and we were unable to recover it.
00:25:53.748 [2024-11-19 11:27:49.057194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.748 [2024-11-19 11:27:49.057219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.748 qpair failed and we were unable to recover it.
00:25:53.748 [2024-11-19 11:27:49.057341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.748 [2024-11-19 11:27:49.057386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.748 qpair failed and we were unable to recover it.
00:25:53.748 [2024-11-19 11:27:49.057619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.748 [2024-11-19 11:27:49.057644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.748 qpair failed and we were unable to recover it.
00:25:53.748 [2024-11-19 11:27:49.057843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.057868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.058051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.058076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.058283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.058307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.058502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.058529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.058738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.058763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 
00:25:53.748 [2024-11-19 11:27:49.058959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.058982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.059192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.059216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.059431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.059456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.059693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.059717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.059949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.059974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 
00:25:53.748 [2024-11-19 11:27:49.060186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.060210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.060442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.060482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.060714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.060739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.060954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.060979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.061195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.061218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 
00:25:53.748 [2024-11-19 11:27:49.061413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.061440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.061630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.061656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.061839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.061864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.062075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.062099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.062272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.062296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 
00:25:53.748 [2024-11-19 11:27:49.062504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.062530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.062705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.062729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.062936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.062959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.063102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.063126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.063349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.063389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 
00:25:53.748 [2024-11-19 11:27:49.063610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.063635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.748 [2024-11-19 11:27:49.063839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.748 [2024-11-19 11:27:49.063863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.748 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.064003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.064027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.064187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.064227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.064422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.064462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 
00:25:53.749 [2024-11-19 11:27:49.064682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.064706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.064901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.064925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.065122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.065146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.065377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.065402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.065555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.065580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 
00:25:53.749 [2024-11-19 11:27:49.065751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.065776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.065950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.065975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.066208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.066232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.066341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.066370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.066559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.066589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 
00:25:53.749 [2024-11-19 11:27:49.066763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.066803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.066960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.066984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.067211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.067234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.067414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.067455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.067660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.067685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 
00:25:53.749 [2024-11-19 11:27:49.067880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.067905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.068118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.068142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.068327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.068351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.068548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.068574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.068761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.068786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 
00:25:53.749 [2024-11-19 11:27:49.068963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.068986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.069151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.069175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.069318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.069358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.069561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.069587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.069815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.069840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 
00:25:53.749 [2024-11-19 11:27:49.070012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.070037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.070260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.070284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.070499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.070525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.070703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.070728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.070955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.070993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 
00:25:53.749 [2024-11-19 11:27:49.071134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.071159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.071350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.071396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.071613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.749 [2024-11-19 11:27:49.071638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.749 qpair failed and we were unable to recover it. 00:25:53.749 [2024-11-19 11:27:49.071869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.071892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.072033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.072056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 
00:25:53.750 [2024-11-19 11:27:49.072245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.072269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.072456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.072485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.072689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.072713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.072935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.072959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.073128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.073153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 
00:25:53.750 [2024-11-19 11:27:49.073340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.073368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.073594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.073621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.073812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.073836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.074003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.074027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.074203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.074227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 
00:25:53.750 [2024-11-19 11:27:49.074443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.074469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.074638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.074663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.074798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.074838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.075061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.075100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.075287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.075312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 
00:25:53.750 [2024-11-19 11:27:49.075497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.075522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.075693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.075717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.075918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.075941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.076170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.076194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.076401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.076427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 
00:25:53.750 [2024-11-19 11:27:49.076563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.076588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.076773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.076796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.076999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.077024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.077209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.077233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 00:25:53.750 [2024-11-19 11:27:49.077453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.750 [2024-11-19 11:27:49.077478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.750 qpair failed and we were unable to recover it. 
00:25:53.753 [2024-11-19 11:27:49.101295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.753 [2024-11-19 11:27:49.101319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.753 qpair failed and we were unable to recover it. 00:25:53.753 [2024-11-19 11:27:49.101533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.753 [2024-11-19 11:27:49.101559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.753 qpair failed and we were unable to recover it. 00:25:53.753 [2024-11-19 11:27:49.101771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.753 [2024-11-19 11:27:49.101794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.753 qpair failed and we were unable to recover it. 00:25:53.753 [2024-11-19 11:27:49.101958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.753 [2024-11-19 11:27:49.101983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.753 qpair failed and we were unable to recover it. 00:25:53.753 [2024-11-19 11:27:49.102205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.753 [2024-11-19 11:27:49.102230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.753 qpair failed and we were unable to recover it. 
00:25:53.754 [2024-11-19 11:27:49.102383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.102409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.102582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.102607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.102799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.102823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.103049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.103073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.103275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.103299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 
00:25:53.754 [2024-11-19 11:27:49.103502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.103527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.103722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.103746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.103943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.103967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.104134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.104158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.104386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.104411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 
00:25:53.754 [2024-11-19 11:27:49.104614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.104639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.104866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.104889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.105128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.105152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.105372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.105396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.105629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.105668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 
00:25:53.754 [2024-11-19 11:27:49.105889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.105913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.106122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.106146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.106308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.106330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.106513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.106538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.106701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.106726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 
00:25:53.754 [2024-11-19 11:27:49.106943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.106967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.107149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.107172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.107399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.107425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.107662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.107699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.107938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.107962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 
00:25:53.754 [2024-11-19 11:27:49.108179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.108216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.108406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.108431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.108650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.108689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.108896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.108920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.109096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.109119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 
00:25:53.754 [2024-11-19 11:27:49.109278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.109302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.109455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.109481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.109637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.109662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.109879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.109902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.110045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.110070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 
00:25:53.754 [2024-11-19 11:27:49.110288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.754 [2024-11-19 11:27:49.110311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.754 qpair failed and we were unable to recover it. 00:25:53.754 [2024-11-19 11:27:49.110539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.110564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.110793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.110816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.110998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.111021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.111239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.111263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 
00:25:53.755 [2024-11-19 11:27:49.111491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.111516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.111694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.111718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.111949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.111973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.112209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.112233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.112425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.112450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 
00:25:53.755 [2024-11-19 11:27:49.112632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.112671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.112836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.112860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.113048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.113072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.113245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.113269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.113480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.113505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 
00:25:53.755 [2024-11-19 11:27:49.113692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.113731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.113953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.113981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.114210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.114234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.114446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.114471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.114714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.114738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 
00:25:53.755 [2024-11-19 11:27:49.114967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.114991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.115232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.115255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.115436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.115462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.115640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.115666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.115875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.115900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 
00:25:53.755 [2024-11-19 11:27:49.116080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.116104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.116313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.116337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.116535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.116560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.116738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.116763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.116995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.117020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 
00:25:53.755 [2024-11-19 11:27:49.117232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.117255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.117444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.117470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.117686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.117711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.117896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.117920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 00:25:53.755 [2024-11-19 11:27:49.118135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.755 [2024-11-19 11:27:49.118158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.755 qpair failed and we were unable to recover it. 
00:25:53.755 [2024-11-19 11:27:49.118338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.755 [2024-11-19 11:27:49.118366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.755 qpair failed and we were unable to recover it.
00:25:53.755 [... the three-line record above repeats verbatim (timestamps 11:27:49.118553 through 11:27:49.144497, same tqpair=0x1045fa0, same addr=10.0.0.2 port=4420, same errno = 111); identical repeats elided ...]
00:25:53.759 [2024-11-19 11:27:49.144722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.144745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.144921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.144945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.145128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.145152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.145383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.145408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.145568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.145594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 
00:25:53.759 [2024-11-19 11:27:49.145771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.145796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.145991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.146015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.146213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.146236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.146448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.146473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.146688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.146726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 
00:25:53.759 [2024-11-19 11:27:49.146901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.146925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.147147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.147172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.147354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.147398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.147619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.147643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.147868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.147891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 
00:25:53.759 [2024-11-19 11:27:49.148132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.148160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.148385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.148410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.148640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.148678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.148889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.148913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.149136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.149160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 
00:25:53.759 [2024-11-19 11:27:49.149320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.149357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.149591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.149617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.149837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.149861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.150086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.150109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.759 [2024-11-19 11:27:49.150326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.150375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 
00:25:53.759 [2024-11-19 11:27:49.150550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.759 [2024-11-19 11:27:49.150575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.759 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.150758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.150782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.151010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.151034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.151254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.151278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.151450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.151476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 
00:25:53.760 [2024-11-19 11:27:49.151712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.151736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.151877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.151901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.152055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.152093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.152306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.152330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.152568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.152593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 
00:25:53.760 [2024-11-19 11:27:49.152757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.152781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.152942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.152965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.153145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.153169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.153404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.153430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.153639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.153681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 
00:25:53.760 [2024-11-19 11:27:49.153896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.153919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.154149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.154173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.154336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.154359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.154563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.154588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.154792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.154815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 
00:25:53.760 [2024-11-19 11:27:49.155035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.155059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.155245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.155269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.155490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.155515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.155693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.155716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.155942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.155966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 
00:25:53.760 [2024-11-19 11:27:49.156126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.156150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.156355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.156399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.156545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.156569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.156760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.156784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 00:25:53.760 [2024-11-19 11:27:49.157016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.760 [2024-11-19 11:27:49.157040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.760 qpair failed and we were unable to recover it. 
00:25:53.760 [2024-11-19 11:27:49.157272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.157297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.157515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.157540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.157760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.157784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.157977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.158000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.158217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.158241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 
00:25:53.761 [2024-11-19 11:27:49.158412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.158436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.158649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.158673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.158898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.158921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.159149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.159172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.159400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.159425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 
00:25:53.761 [2024-11-19 11:27:49.159613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.159638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.159793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.159816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.160033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.160057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.160254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.160278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.160503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.160528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 
00:25:53.761 [2024-11-19 11:27:49.160757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.160781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.160968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.160992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.161146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.161169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.161367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.161392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 00:25:53.761 [2024-11-19 11:27:49.161559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.761 [2024-11-19 11:27:49.161584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.761 qpair failed and we were unable to recover it. 
00:25:53.761 [2024-11-19 11:27:49.161803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.761 [2024-11-19 11:27:49.161827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:53.761 qpair failed and we were unable to recover it.
00:25:53.765 [previous three-line error sequence repeated through 2024-11-19 11:27:49.188186: connect() failed, errno = 111; sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:25:53.765 [2024-11-19 11:27:49.188430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.765 [2024-11-19 11:27:49.188470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.765 qpair failed and we were unable to recover it. 00:25:53.765 [2024-11-19 11:27:49.188692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.765 [2024-11-19 11:27:49.188718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.765 qpair failed and we were unable to recover it. 00:25:53.765 [2024-11-19 11:27:49.188880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.765 [2024-11-19 11:27:49.188905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.765 qpair failed and we were unable to recover it. 00:25:53.765 [2024-11-19 11:27:49.189124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.765 [2024-11-19 11:27:49.189149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.765 qpair failed and we were unable to recover it. 00:25:53.765 [2024-11-19 11:27:49.189350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.765 [2024-11-19 11:27:49.189384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.765 qpair failed and we were unable to recover it. 
00:25:53.765 [2024-11-19 11:27:49.189594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.765 [2024-11-19 11:27:49.189619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.765 qpair failed and we were unable to recover it. 00:25:53.765 [2024-11-19 11:27:49.189859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.765 [2024-11-19 11:27:49.189882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.765 qpair failed and we were unable to recover it. 00:25:53.765 [2024-11-19 11:27:49.190045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.765 [2024-11-19 11:27:49.190069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.765 qpair failed and we were unable to recover it. 00:25:53.765 [2024-11-19 11:27:49.190245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.765 [2024-11-19 11:27:49.190285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.765 qpair failed and we were unable to recover it. 00:25:53.765 [2024-11-19 11:27:49.190426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.765 [2024-11-19 11:27:49.190452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.765 qpair failed and we were unable to recover it. 
00:25:53.765 [2024-11-19 11:27:49.190670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.765 [2024-11-19 11:27:49.190695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:53.765 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.190861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.190886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.191100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.191126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.191340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.191369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.191571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.191596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 
00:25:54.051 [2024-11-19 11:27:49.191805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.191844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.192024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053f30 is same with the state(6) to be set 00:25:54.051 [2024-11-19 11:27:49.192302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.192345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.192581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.192612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.192837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.192865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.193076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.193102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 
00:25:54.051 [2024-11-19 11:27:49.193274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.193300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.193512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.193538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.193746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.193772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.193984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.194010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.194187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.194212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 
00:25:54.051 [2024-11-19 11:27:49.194394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.194422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.194577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.194602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.194776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.194802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.195008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.195037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 00:25:54.051 [2024-11-19 11:27:49.195250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.195275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.051 qpair failed and we were unable to recover it. 
00:25:54.051 [2024-11-19 11:27:49.195478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.051 [2024-11-19 11:27:49.195505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.195684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.195709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.195912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.195937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.196115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.196139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.196346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.196378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 
00:25:54.052 [2024-11-19 11:27:49.196553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.196579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.196785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.196810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.196957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.196983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.197125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.197151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.197336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.197366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 
00:25:54.052 [2024-11-19 11:27:49.197512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.197537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.197756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.197781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.197960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.197985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.198158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.198183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.198390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.198416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 
00:25:54.052 [2024-11-19 11:27:49.198577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.198603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.198816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.198841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.198940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.198965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.199166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.199191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.199374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.199399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 
00:25:54.052 [2024-11-19 11:27:49.199575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.199601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.199822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.199847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.200057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.200083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.200200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.200226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.200373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.200398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 
00:25:54.052 [2024-11-19 11:27:49.200569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.200599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.200773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.200799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.200959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.200984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.201175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.201201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.201373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.201399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 
00:25:54.052 [2024-11-19 11:27:49.201603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.201628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.201780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.201805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.202016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.202041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.202182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.052 [2024-11-19 11:27:49.202207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.052 qpair failed and we were unable to recover it. 00:25:54.052 [2024-11-19 11:27:49.202415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.053 [2024-11-19 11:27:49.202441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.053 qpair failed and we were unable to recover it. 
00:25:54.053 [2024-11-19 11:27:49.202664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.053 [2024-11-19 11:27:49.202689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.053 qpair failed and we were unable to recover it. 00:25:54.053 [2024-11-19 11:27:49.202854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.053 [2024-11-19 11:27:49.202880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.053 qpair failed and we were unable to recover it. 00:25:54.053 [2024-11-19 11:27:49.203088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.053 [2024-11-19 11:27:49.203113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.053 qpair failed and we were unable to recover it. 00:25:54.053 [2024-11-19 11:27:49.203292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.053 [2024-11-19 11:27:49.203317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.053 qpair failed and we were unable to recover it. 00:25:54.053 [2024-11-19 11:27:49.203536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.053 [2024-11-19 11:27:49.203562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.053 qpair failed and we were unable to recover it. 
00:25:54.053 [2024-11-19 11:27:49.203741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.053 [2024-11-19 11:27:49.203767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.053 qpair failed and we were unable to recover it. 00:25:54.053 [2024-11-19 11:27:49.203981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.053 [2024-11-19 11:27:49.204006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.053 qpair failed and we were unable to recover it. 00:25:54.053 [2024-11-19 11:27:49.204180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.053 [2024-11-19 11:27:49.204205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.053 qpair failed and we were unable to recover it. 00:25:54.053 [2024-11-19 11:27:49.204388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.053 [2024-11-19 11:27:49.204414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.053 qpair failed and we were unable to recover it. 00:25:54.053 [2024-11-19 11:27:49.204618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.053 [2024-11-19 11:27:49.204643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.053 qpair failed and we were unable to recover it. 
00:25:54.053 [2024-11-19 11:27:49.204849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.053 [2024-11-19 11:27:49.204874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.053 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it" triplet repeats through 2024-11-19 11:27:49.230253 ...]
00:25:54.057 [2024-11-19 11:27:49.230443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.230468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.230662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.230685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.230899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.230922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.231151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.231175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.231325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.231347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 
00:25:54.057 [2024-11-19 11:27:49.231542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.231567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.231791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.231815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.231975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.231997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.232229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.232253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.232459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.232484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 
00:25:54.057 [2024-11-19 11:27:49.232696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.232719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.232953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.232977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.233170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.233194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.233400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.233425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.233639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.233664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 
00:25:54.057 [2024-11-19 11:27:49.233900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.233923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.234103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.234127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.234347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.234392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.234614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.234638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.234812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.234835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 
00:25:54.057 [2024-11-19 11:27:49.235010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.235034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.235142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.235166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.235401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.235425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.235617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.235642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.057 qpair failed and we were unable to recover it. 00:25:54.057 [2024-11-19 11:27:49.235864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.057 [2024-11-19 11:27:49.235887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 
00:25:54.058 [2024-11-19 11:27:49.236056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.236079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.236263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.236287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.236520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.236545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.236735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.236772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.236991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.237015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 
00:25:54.058 [2024-11-19 11:27:49.237191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.237215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.237436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.237460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.237681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.237706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.237931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.237955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.238164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.238187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 
00:25:54.058 [2024-11-19 11:27:49.238410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.238436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.238650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.238673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.238877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.238900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.239077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.239100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.239300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.239323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 
00:25:54.058 [2024-11-19 11:27:49.239569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.239594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.239783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.239807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.239972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.239996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.240198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.240221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.240444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.240470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 
00:25:54.058 [2024-11-19 11:27:49.240704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.240728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.240877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.240900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.241081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.241105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.241330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.241354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.241476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.241499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 
00:25:54.058 [2024-11-19 11:27:49.241732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.241756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.241942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.241966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.242146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.242169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.242399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.242424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.242632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.242656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 
00:25:54.058 [2024-11-19 11:27:49.242817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.058 [2024-11-19 11:27:49.242845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.058 qpair failed and we were unable to recover it. 00:25:54.058 [2024-11-19 11:27:49.243071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.243095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.243326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.243349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.243500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.243525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.243740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.243764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 
00:25:54.059 [2024-11-19 11:27:49.243936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.243959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.244163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.244186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.244411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.244437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.244568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.244592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.244713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.244752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 
00:25:54.059 [2024-11-19 11:27:49.244970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.244995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.245180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.245205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.245411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.245436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.245601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.245625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.245856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.245880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 
00:25:54.059 [2024-11-19 11:27:49.246094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.246117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.246317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.246341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.246555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.246580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.246786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.246825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 00:25:54.059 [2024-11-19 11:27:49.247047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.059 [2024-11-19 11:27:49.247071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.059 qpair failed and we were unable to recover it. 
00:25:54.059 [2024-11-19 11:27:49.247248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:54.059 [2024-11-19 11:27:49.247271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 
00:25:54.059 qpair failed and we were unable to recover it. 
00:25:54.061 [2024-11-19 11:27:49.260572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.260597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 00:25:54.061 [2024-11-19 11:27:49.260816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.260839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 00:25:54.061 [2024-11-19 11:27:49.261074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.261098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 00:25:54.061 [2024-11-19 11:27:49.261302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.261326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 00:25:54.061 [2024-11-19 11:27:49.261579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.261626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 
00:25:54.061 [2024-11-19 11:27:49.261899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.261931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 00:25:54.061 [2024-11-19 11:27:49.262180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.262210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 00:25:54.061 [2024-11-19 11:27:49.262453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.262488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 00:25:54.061 [2024-11-19 11:27:49.262701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.262747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 00:25:54.061 [2024-11-19 11:27:49.262943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.262974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 
00:25:54.061 [2024-11-19 11:27:49.263231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.263265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 00:25:54.061 [2024-11-19 11:27:49.263470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.263506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 00:25:54.061 [2024-11-19 11:27:49.263703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.061 [2024-11-19 11:27:49.263735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.061 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.263934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.263969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.264152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.264188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 
00:25:54.062 [2024-11-19 11:27:49.264380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.264415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.264591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.264630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.264791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.264831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.264960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.264997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.265190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.265228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 
00:25:54.062 [2024-11-19 11:27:49.265413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.265451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.265584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.265621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.265806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.265840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.266010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.266043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.266148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.266181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 
00:25:54.062 [2024-11-19 11:27:49.266379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.266420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.266578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.266612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.266762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.266796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.266971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.267010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.267162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.267197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 
00:25:54.062 [2024-11-19 11:27:49.267326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.267360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.267493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.267527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.267704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.267738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.267882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.267916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.268061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.268095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 
00:25:54.062 [2024-11-19 11:27:49.268269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.268303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.268490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.268524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.268674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.268707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.268846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.268880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.062 qpair failed and we were unable to recover it. 00:25:54.062 [2024-11-19 11:27:49.269022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.062 [2024-11-19 11:27:49.269068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.063 qpair failed and we were unable to recover it. 
00:25:54.063 [2024-11-19 11:27:49.269264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.063 [2024-11-19 11:27:49.269300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.063 qpair failed and we were unable to recover it. 00:25:54.063 [2024-11-19 11:27:49.269476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.063 [2024-11-19 11:27:49.269510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.063 qpair failed and we were unable to recover it. 00:25:54.063 [2024-11-19 11:27:49.269660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.063 [2024-11-19 11:27:49.269696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.063 qpair failed and we were unable to recover it. 00:25:54.063 [2024-11-19 11:27:49.269930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.063 [2024-11-19 11:27:49.269965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.063 qpair failed and we were unable to recover it. 00:25:54.063 [2024-11-19 11:27:49.270200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.063 [2024-11-19 11:27:49.270238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.063 qpair failed and we were unable to recover it. 
00:25:54.063 [2024-11-19 11:27:49.270386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.270421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.270534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.270568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.270735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.270769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.270960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.270993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.271148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.271183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.271379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.271417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.271534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.271568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.271768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.271804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.271964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.271997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.272173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.272208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.272342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.272387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.272518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.272554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.272752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.272800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.272969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.273001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.273191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.273224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.273407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.273442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.273569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.273602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.273771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.273816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.274029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.274058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.274224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.274256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.274429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.274463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.274625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.274659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.274818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.274849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.063 qpair failed and we were unable to recover it.
00:25:54.063 [2024-11-19 11:27:49.275011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.063 [2024-11-19 11:27:49.275064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.275271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.275303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.275468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.275503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.275689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.275735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.275915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.275953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.276108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.276151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.276335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.276373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.276515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.276548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.276738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.276782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.276947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.276976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.277160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.277191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.277354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.277408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.277543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.277579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.277711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.277759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.277904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.277948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.278117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.278148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.278342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.278414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.278563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.278596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.278807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.278838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.279031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.279062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.279220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.279250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.279459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.279491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.279682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.279711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.279888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.279918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.280087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.280119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.280307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.280339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.280531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.280566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.280745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.280783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.280969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.280995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.281115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.064 [2024-11-19 11:27:49.281155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.064 qpair failed and we were unable to recover it.
00:25:54.064 [2024-11-19 11:27:49.281274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.281298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.281467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.281493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.281633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.281658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.281833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.281857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.281960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.282000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.282178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.282216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.282434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.282460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.282589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.282614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.282703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.282742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.282946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.282969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.283174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.283203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.283325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.283348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.283482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.283507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.283684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.283709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.284009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.284033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.284157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.284180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.284391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.284434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.284549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.284573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.284748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.284772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.284916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.284941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.285119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.285143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.285245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.285268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.285437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.285464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.285555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.285579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.285815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.285839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.286002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.286033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.286193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.286217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.286378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.286403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.286574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.286599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.065 qpair failed and we were unable to recover it.
00:25:54.065 [2024-11-19 11:27:49.286698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.065 [2024-11-19 11:27:49.286722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.286853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.066 [2024-11-19 11:27:49.286877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.287092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.066 [2024-11-19 11:27:49.287117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.287324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.066 [2024-11-19 11:27:49.287370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.287534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.066 [2024-11-19 11:27:49.287560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.287689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.066 [2024-11-19 11:27:49.287713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.287889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.066 [2024-11-19 11:27:49.287912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.288171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.066 [2024-11-19 11:27:49.288195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.288377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.066 [2024-11-19 11:27:49.288421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.288590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.066 [2024-11-19 11:27:49.288615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.288763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.066 [2024-11-19 11:27:49.288785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.288924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.066 [2024-11-19 11:27:49.288947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.289073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.066 [2024-11-19 11:27:49.289098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.066 qpair failed and we were unable to recover it.
00:25:54.066 [2024-11-19 11:27:49.289249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.289277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.289436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.289461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.289630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.289656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.289833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.289858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.290091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.290115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 
00:25:54.066 [2024-11-19 11:27:49.290294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.290318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.290467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.290491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.290675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.290699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.290918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.290947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.291089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.291112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 
00:25:54.066 [2024-11-19 11:27:49.291203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.291226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.291466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.291491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.291630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.291668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.291905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.291929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.292146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.292170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 
00:25:54.066 [2024-11-19 11:27:49.292411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.292436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.292609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.066 [2024-11-19 11:27:49.292633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.066 qpair failed and we were unable to recover it. 00:25:54.066 [2024-11-19 11:27:49.292882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.292906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.293075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.293098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.293296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.293321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 
00:25:54.067 [2024-11-19 11:27:49.293474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.293500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.293697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.293721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.293905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.293928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.294092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.294133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.294334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.294357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 
00:25:54.067 [2024-11-19 11:27:49.294496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.294521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.294604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.294629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.294844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.294867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.295045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.295069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.295220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.295243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 
00:25:54.067 [2024-11-19 11:27:49.295369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.295393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.295558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.295583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.295770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.295795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.296023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.296046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.296186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.296210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 
00:25:54.067 [2024-11-19 11:27:49.296436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.296461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.296598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.296622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.296817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.296841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.297032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.297055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.297215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.297237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 
00:25:54.067 [2024-11-19 11:27:49.297377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.297426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.297556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.297595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.297763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.297786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.298026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.298050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.067 [2024-11-19 11:27:49.298224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.298248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 
00:25:54.067 [2024-11-19 11:27:49.298413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.067 [2024-11-19 11:27:49.298437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.067 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.298577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.298601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.298752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.298790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.298931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.298960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.299118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.299143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 
00:25:54.068 [2024-11-19 11:27:49.299279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.299318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.299507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.299532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.299685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.299709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.299920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.299944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.300113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.300135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 
00:25:54.068 [2024-11-19 11:27:49.300271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.300301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.300499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.300525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.300654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.300678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.300871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.300894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.301055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.301093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 
00:25:54.068 [2024-11-19 11:27:49.301275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.301298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.301455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.301481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.301645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.301671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.301946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.301969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.302142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.302165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 
00:25:54.068 [2024-11-19 11:27:49.302343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.302373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.302486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.302510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.302715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.302738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.302977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.303000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.303159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.303181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 
00:25:54.068 [2024-11-19 11:27:49.303321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.303345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.303495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.303535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.303699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.303722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.068 qpair failed and we were unable to recover it. 00:25:54.068 [2024-11-19 11:27:49.303912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.068 [2024-11-19 11:27:49.303936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-19 11:27:49.304108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.069 [2024-11-19 11:27:49.304132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.069 qpair failed and we were unable to recover it. 
00:25:54.069 [2024-11-19 11:27:49.304359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.069 [2024-11-19 11:27:49.304398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-19 11:27:49.304569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.069 [2024-11-19 11:27:49.304595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-19 11:27:49.304799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.069 [2024-11-19 11:27:49.304823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-19 11:27:49.305045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.069 [2024-11-19 11:27:49.305068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-19 11:27:49.305195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.069 [2024-11-19 11:27:49.305218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.069 qpair failed and we were unable to recover it. 
00:25:54.069 [2024-11-19 11:27:49.305316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.069 [2024-11-19 11:27:49.305356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-19 11:27:49.305514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.069 [2024-11-19 11:27:49.305538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-19 11:27:49.305687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.069 [2024-11-19 11:27:49.305712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-19 11:27:49.305824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.069 [2024-11-19 11:27:49.305849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-19 11:27:49.306008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.069 [2024-11-19 11:27:49.306031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.069 qpair failed and we were unable to recover it. 
00:25:54.069 [2024-11-19 11:27:49.306206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.069 [2024-11-19 11:27:49.306230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.069 qpair failed and we were unable to recover it.
[... the same connect()/qpair error triplet repeats continuously through 11:27:49.329820, with only the timestamps changing ...]
00:25:54.073 [2024-11-19 11:27:49.330067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.330093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.330288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.330311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.330493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.330518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.330731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.330756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.330965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.330989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 
00:25:54.073 [2024-11-19 11:27:49.331197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.331222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.331375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.331416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.331614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.331654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.331779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.331803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.332011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.332035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 
00:25:54.073 [2024-11-19 11:27:49.332241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.332264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.332455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.332482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.332650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.332675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.332896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.332920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.333102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.333126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 
00:25:54.073 [2024-11-19 11:27:49.333302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.333325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.333439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.333464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.333613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.333637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.333821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.333845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.333947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.333972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 
00:25:54.073 [2024-11-19 11:27:49.334103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.334128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.073 [2024-11-19 11:27:49.334358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.073 [2024-11-19 11:27:49.334413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.073 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.334553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.334577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.334769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.334794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.334996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.335021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 
00:25:54.074 [2024-11-19 11:27:49.335242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.335265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.335443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.335470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.335597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.335622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.335818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.335843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.336017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.336041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 
00:25:54.074 [2024-11-19 11:27:49.336201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.336224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.336411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.336436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.336627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.336652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.336874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.336899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.337002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.337041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 
00:25:54.074 [2024-11-19 11:27:49.337229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.337258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.337382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.337409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.337619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.337661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.337774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.337814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.338015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.338040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 
00:25:54.074 [2024-11-19 11:27:49.338246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.338268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.338523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.338548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.338701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.338725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.338896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.338920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.339061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.339086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 
00:25:54.074 [2024-11-19 11:27:49.339227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.339252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.339437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.339463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.339684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.339709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.339899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.074 [2024-11-19 11:27:49.339925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.074 qpair failed and we were unable to recover it. 00:25:54.074 [2024-11-19 11:27:49.340116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.340140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 
00:25:54.075 [2024-11-19 11:27:49.340341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.340394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.340522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.340546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.340762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.340785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.341022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.341045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.341227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.341252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 
00:25:54.075 [2024-11-19 11:27:49.341359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.341390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.341511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.341536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.341664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.341698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.341880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.341919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.342026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.342049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 
00:25:54.075 [2024-11-19 11:27:49.342294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.342319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.342527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.342552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.342716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.342742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.342971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.342995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.343099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.343122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 
00:25:54.075 [2024-11-19 11:27:49.343286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.343310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.343472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.343498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.343704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.343744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.343850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.343874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.344085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.344125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 
00:25:54.075 [2024-11-19 11:27:49.344314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.344338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.344529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.344554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.344730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.344754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.344964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.344988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 00:25:54.075 [2024-11-19 11:27:49.345173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.075 [2024-11-19 11:27:49.345197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.075 qpair failed and we were unable to recover it. 
00:25:54.075 [2024-11-19 11:27:49.345377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.075 [2024-11-19 11:27:49.345422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.075 qpair failed and we were unable to recover it.
00:25:54.079 [the same three-line error sequence (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock connection error for tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420, qpair failed and unrecoverable) repeats continuously from 11:27:49.345 through 11:27:49.369]
00:25:54.079 [2024-11-19 11:27:49.369458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.369483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 00:25:54.079 [2024-11-19 11:27:49.369663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.369688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 00:25:54.079 [2024-11-19 11:27:49.369871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.369896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 00:25:54.079 [2024-11-19 11:27:49.370049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.370072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 00:25:54.079 [2024-11-19 11:27:49.370251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.370276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 
00:25:54.079 [2024-11-19 11:27:49.370416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.370446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 00:25:54.079 [2024-11-19 11:27:49.370628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.370667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 00:25:54.079 [2024-11-19 11:27:49.370874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.370897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 00:25:54.079 [2024-11-19 11:27:49.371116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.371140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 00:25:54.079 [2024-11-19 11:27:49.371371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.371397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 
00:25:54.079 [2024-11-19 11:27:49.371586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.371611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 00:25:54.079 [2024-11-19 11:27:49.371797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.371823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 00:25:54.079 [2024-11-19 11:27:49.371988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.372027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.079 qpair failed and we were unable to recover it. 00:25:54.079 [2024-11-19 11:27:49.372192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.079 [2024-11-19 11:27:49.372216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.372415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.372441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 
00:25:54.080 [2024-11-19 11:27:49.372660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.372684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.372928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.372954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.373130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.373154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.373285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.373328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.373519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.373546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 
00:25:54.080 [2024-11-19 11:27:49.373764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.373788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.374017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.374040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.374145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.374170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.374315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.374341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.374526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.374566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 
00:25:54.080 [2024-11-19 11:27:49.374833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.374869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.375103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.375136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.375298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.375322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.375553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.375579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.375720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.375744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 
00:25:54.080 [2024-11-19 11:27:49.375917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.375941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.376180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.376205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.376381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.376406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.376562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.376587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.376839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.376863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 
00:25:54.080 [2024-11-19 11:27:49.377001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.377024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.377262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.377287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.377482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.377508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.377716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.377741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.377917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.377940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 
00:25:54.080 [2024-11-19 11:27:49.378132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.378156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.378387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.378413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.378561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.378586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.378756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.378780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 00:25:54.080 [2024-11-19 11:27:49.378971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.080 [2024-11-19 11:27:49.378995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.080 qpair failed and we were unable to recover it. 
00:25:54.080 [2024-11-19 11:27:49.379171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.379199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.379433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.379459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.379598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.379638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.379806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.379828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.379977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.380001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 
00:25:54.081 [2024-11-19 11:27:49.380236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.380260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.380391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.380432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.380616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.380642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.380868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.380894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.381099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.381123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 
00:25:54.081 [2024-11-19 11:27:49.381299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.381322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.381493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.381519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.381745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.381769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.382005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.382030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.382198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.382224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 
00:25:54.081 [2024-11-19 11:27:49.382376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.382402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.382568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.382595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.382819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.382843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.382994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.383017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.383196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.383221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 
00:25:54.081 [2024-11-19 11:27:49.383381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.383406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.383605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.383630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.383821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.383845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.383999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.384040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.384219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.384258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 
00:25:54.081 [2024-11-19 11:27:49.384439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.384465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.384594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.384634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.384876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.384915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.385143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.385178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 00:25:54.081 [2024-11-19 11:27:49.385416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.081 [2024-11-19 11:27:49.385450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.081 qpair failed and we were unable to recover it. 
00:25:54.082 [2024-11-19 11:27:49.385701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.385726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.385933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.385958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.386144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.386190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.386397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.386437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.386623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.386664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 
00:25:54.082 [2024-11-19 11:27:49.386766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.386790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.386975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.387015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.387186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.387211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.387414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.387441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.387662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.387686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 
00:25:54.082 [2024-11-19 11:27:49.387852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.387876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.388056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.388081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.388327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.388375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.388542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.388573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.388743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.388768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 
00:25:54.082 [2024-11-19 11:27:49.388991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.389016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.389187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.389212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.389359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.389390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.389599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.389624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.389846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.389869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 
00:25:54.082 [2024-11-19 11:27:49.390004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.390029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.390229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.390254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.390432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.390457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.390666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.390691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.390886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.390911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 
00:25:54.082 [2024-11-19 11:27:49.391082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.391105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.391329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.391376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.391525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.391551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.391691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.082 [2024-11-19 11:27:49.391715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.082 qpair failed and we were unable to recover it. 00:25:54.082 [2024-11-19 11:27:49.391912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.391937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 
00:25:54.083 [2024-11-19 11:27:49.392113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.392139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.392291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.392316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.392550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.392576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.392764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.392788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.392939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.392962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 
00:25:54.083 [2024-11-19 11:27:49.393094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.393119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.393315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.393339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.393478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.393504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.393689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.393714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.393923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.393946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 
00:25:54.083 [2024-11-19 11:27:49.394068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.394092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.394324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.394371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.394542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.394567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.394752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.394775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.394914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.394946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 
00:25:54.083 [2024-11-19 11:27:49.395074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.395099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.395276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.395300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.395451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.395478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.395640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.395665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.395854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.395878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 
00:25:54.083 [2024-11-19 11:27:49.396008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.396033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.396237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.396268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.396430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.396456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.396655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.396694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.396813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.396837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 
00:25:54.083 [2024-11-19 11:27:49.396996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.397021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.397170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.397194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.397376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.397403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.397546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.397572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.397693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.397717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 
00:25:54.083 [2024-11-19 11:27:49.397852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.397877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.398085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.398108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.398276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.398300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.398426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.083 [2024-11-19 11:27:49.398452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.083 qpair failed and we were unable to recover it. 00:25:54.083 [2024-11-19 11:27:49.398568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.398593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 
00:25:54.084 [2024-11-19 11:27:49.398797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.398822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.399020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.399045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.399168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.399194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.399422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.399448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.399572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.399598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 
00:25:54.084 [2024-11-19 11:27:49.399746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.399786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.399964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.399988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.400167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.400192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.400397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.400424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.400603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.400628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 
00:25:54.084 [2024-11-19 11:27:49.400799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.400822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.400929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.400954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.401139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.401163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.401313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.401341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.401527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.401553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 
00:25:54.084 [2024-11-19 11:27:49.401716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.401741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.401930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.401956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.402061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.402085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.402214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.402239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.402451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.402490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 
00:25:54.084 [2024-11-19 11:27:49.402644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.402686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.402858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.402899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.403053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.403077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.403293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.403318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.403477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.403504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 
00:25:54.084 [2024-11-19 11:27:49.403639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.403680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.403889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.403915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.404155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.404182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.404352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.404407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 00:25:54.084 [2024-11-19 11:27:49.404572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.084 [2024-11-19 11:27:49.404598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.084 qpair failed and we were unable to recover it. 
00:25:54.084 [2024-11-19 11:27:49.404749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.404773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.405024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.405049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.405201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.405226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.405377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.405403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.405507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.405533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.405680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.405719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.405896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.405919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.406117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.406141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.406242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.406265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.406403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.406429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.406554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.406584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.406712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.406751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.406889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.406915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.407082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.407122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.407277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.407301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.407455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.407480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.407582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.407607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.407746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.407770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.407866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.407891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.408041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.408066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.408189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.408229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.408437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.408463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.408628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.408663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.408828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.408852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.409056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.409081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.409231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.409256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.409442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.409467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.409589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.085 [2024-11-19 11:27:49.409614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.085 qpair failed and we were unable to recover it.
00:25:54.085 [2024-11-19 11:27:49.409731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.409757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.409898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.409923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.410100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.410126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.410279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.410303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.410451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.410478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.410572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.410598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.410696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.410721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.410840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.410865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.410988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.411013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.411122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.411152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.411322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.411346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.411471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.411497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.411623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.411648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.411783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.411822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.411920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.411946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.412085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.412111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.412306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.412331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.412469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.412496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.412650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.412689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.412821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.412846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.413000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.413024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.413176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.413201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.413352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.413386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.413556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.413581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.413687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.413711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.413824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.413849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.413987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.414026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.414192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.414217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.414357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.086 [2024-11-19 11:27:49.414403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.086 qpair failed and we were unable to recover it.
00:25:54.086 [2024-11-19 11:27:49.414507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.414532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.414656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.414682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.414859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.414882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.415055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.415080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.415245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.415271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.415402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.415428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.415517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.415542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.415697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.415722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.415824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.415864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.415964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.415988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.416104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.416129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.416245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.416270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.416428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.416453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.416580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.416606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.416776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.416800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.416913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.416937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.417039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.417064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.417162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.417187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.417295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.417336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.417505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.417531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.417684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.417723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.417817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.417857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.417947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.417972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.418048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.418073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.418237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.418262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.418406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.418445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.418581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.418609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.087 [2024-11-19 11:27:49.418746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.087 [2024-11-19 11:27:49.418773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.087 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.418899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.418925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.419038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.419064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.419172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.419198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.419320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.419346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.419491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.419517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.419633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.419658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.419826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.419850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.419985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.420009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.420145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.420170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.420297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.420324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.420477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.420504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.420650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.420676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.420779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.420819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.420967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.420992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.421130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.421156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.421280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.421307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.421458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.421484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.421629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.421655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.421770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.421811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.421960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.421984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.422140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.422165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.422260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.422287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.422452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.422479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.422623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.422650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.422806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.422832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.422956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.422981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.423113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.423140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.423241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.423267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.423407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.423433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.423552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.088 [2024-11-19 11:27:49.423577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.088 qpair failed and we were unable to recover it.
00:25:54.088 [2024-11-19 11:27:49.423701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.088 [2024-11-19 11:27:49.423727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.088 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.423825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.423850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.424017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.424042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.424173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.424199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.424389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.424416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 
00:25:54.089 [2024-11-19 11:27:49.424508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.424534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.424648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.424675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.424811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.424836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.424979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.425005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.425100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.425127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 
00:25:54.089 [2024-11-19 11:27:49.425264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.425289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.425404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.425431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.425524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.425549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.425636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.425677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.425814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.425839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 
00:25:54.089 [2024-11-19 11:27:49.425974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.426000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.426102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.426127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.426272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.426298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.426401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.426428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.426554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.426580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 
00:25:54.089 [2024-11-19 11:27:49.426690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.426715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.426894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.426920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.427019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.427044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.427209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.427235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.427354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.427387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 
00:25:54.089 [2024-11-19 11:27:49.427491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.427516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.427615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.427641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.427787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.427827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.089 qpair failed and we were unable to recover it. 00:25:54.089 [2024-11-19 11:27:49.427961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.089 [2024-11-19 11:27:49.427985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.428129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.428154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 
00:25:54.090 [2024-11-19 11:27:49.428300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.428329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.428428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.428453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.428542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.428568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.428715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.428758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.428920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.428946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 
00:25:54.090 [2024-11-19 11:27:49.429078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.429103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.429219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.429244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.429389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.429416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.429513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.429538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.429629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.429655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 
00:25:54.090 [2024-11-19 11:27:49.429779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.429804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.429943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.429969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.430117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.430160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.430275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.430300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.430445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.430471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 
00:25:54.090 [2024-11-19 11:27:49.430581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.430606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.430734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.430759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.430884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.430909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.431012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.431038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.431191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.431216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 
00:25:54.090 [2024-11-19 11:27:49.431323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.431348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.431477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.431504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.431626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.431666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.431826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.431852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.431983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.432008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 
00:25:54.090 [2024-11-19 11:27:49.432152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.432177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.432340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.432372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.090 [2024-11-19 11:27:49.432470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.090 [2024-11-19 11:27:49.432500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.090 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.432627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.432667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.432790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.432816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 
00:25:54.091 [2024-11-19 11:27:49.432949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.432976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.433139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.433163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.433271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.433296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.433419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.433445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.433572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.433598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 
00:25:54.091 [2024-11-19 11:27:49.433720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.433746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.433871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.433897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.434031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.434056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.434192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.434218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.434308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.434333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 
00:25:54.091 [2024-11-19 11:27:49.434452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.434478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.434645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.434670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.434814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.434839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.434958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.434982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.435120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.435145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 
00:25:54.091 [2024-11-19 11:27:49.435306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.435369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.435509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.435537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.435696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.435722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.435861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.435886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.436006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.436031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 
00:25:54.091 [2024-11-19 11:27:49.436210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.436235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.436356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.436402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.436540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.436566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.091 qpair failed and we were unable to recover it. 00:25:54.091 [2024-11-19 11:27:49.436680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.091 [2024-11-19 11:27:49.436706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.092 qpair failed and we were unable to recover it. 00:25:54.092 [2024-11-19 11:27:49.436796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.092 [2024-11-19 11:27:49.436828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.092 qpair failed and we were unable to recover it. 
00:25:54.092 [2024-11-19 11:27:49.436932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.436956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.437088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.437112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.437231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.437258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.437402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.437430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.437533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.437559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.437647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.437672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.437807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.437831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.437927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.437953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.438073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.438100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.438199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.438223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.438357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.438387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.438556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.438581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.438714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.438739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.438846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.438871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.438991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.439017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.439177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.439202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.439314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.439339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.439445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.439472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.439578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.439603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.439723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.439749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.439906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.439946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.440126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.440151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.440262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.440286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.440378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.440404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.440552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.440578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.440700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.440725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.440864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.440904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.440996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.441020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.441181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.441206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.441286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.441312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.441408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.441434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.092 qpair failed and we were unable to recover it.
00:25:54.092 [2024-11-19 11:27:49.441556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.092 [2024-11-19 11:27:49.441582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.441704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.441729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.441867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.441892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.442024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.442048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.442147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.442173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.442299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.442324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.442457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.442482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.442605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.442631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.442778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.442803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.442940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.442966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.443055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.443080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.443207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.443232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.443374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.443399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.443485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.443510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.443616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.443656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.443773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.443797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.443891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.443916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.444037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.444062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.444191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.444217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.444322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.444372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.444479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.444506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.444597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.444624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.444748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.444788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.444904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.444929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.445040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.445065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.445178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.445204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.445312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.445336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.445485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.445510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.445606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.445632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.445769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.445794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.445926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.445950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.446083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.446109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.446224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.093 [2024-11-19 11:27:49.446249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.093 qpair failed and we were unable to recover it.
00:25:54.093 [2024-11-19 11:27:49.446351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.446392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.446511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.446537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.446671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.446696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.446830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.446855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.447017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.447042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.447204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.447228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.447377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.447403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.447550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.447576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.447701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.447739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.447837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.447862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.447955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.447981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.448107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.448132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.448265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.448291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.448431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.448471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.448603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.448644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.448741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.448767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.448916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.448943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.449079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.449104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.449230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.449256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.449347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.449378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.449481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.449506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.449602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.449627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.449786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.449826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.449948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.449973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.450147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.450174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.450269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.450311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.450453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.450478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.450624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.450663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.450839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.450863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.450967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.451006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.451221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.094 [2024-11-19 11:27:49.451247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.094 qpair failed and we were unable to recover it.
00:25:54.094 [2024-11-19 11:27:49.451424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.094 [2024-11-19 11:27:49.451463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.094 qpair failed and we were unable to recover it. 00:25:54.094 [2024-11-19 11:27:49.451584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.094 [2024-11-19 11:27:49.451609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.094 qpair failed and we were unable to recover it. 00:25:54.094 [2024-11-19 11:27:49.451788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.094 [2024-11-19 11:27:49.451812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.094 qpair failed and we were unable to recover it. 00:25:54.094 [2024-11-19 11:27:49.451993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.094 [2024-11-19 11:27:49.452018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.094 qpair failed and we were unable to recover it. 00:25:54.094 [2024-11-19 11:27:49.452136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.094 [2024-11-19 11:27:49.452161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.094 qpair failed and we were unable to recover it. 
00:25:54.095 [2024-11-19 11:27:49.452330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.452355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.452462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.452488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.452608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.452634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.452848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.452873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.453021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.453045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 
00:25:54.095 [2024-11-19 11:27:49.453189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.453214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.453374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.453408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.453560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.453590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.453740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.453764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.453938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.453962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 
00:25:54.095 [2024-11-19 11:27:49.454133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.454158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.454294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.454318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.454461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.454487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.454588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.454613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.454738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.454763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 
00:25:54.095 [2024-11-19 11:27:49.454869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.454901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.455061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.455086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.455193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.455217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.455443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.455469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.455601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.455626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 
00:25:54.095 [2024-11-19 11:27:49.455750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.455788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.455900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.455924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.456096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.456122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.456274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.456299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.456424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.456450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 
00:25:54.095 [2024-11-19 11:27:49.456623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.456647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.456766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.456789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.456899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.456923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.457144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.457168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.457257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.457281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 
00:25:54.095 [2024-11-19 11:27:49.457431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.457457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.457551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.457576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.457695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.457719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.457801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.457825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.457999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.458029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 
00:25:54.095 [2024-11-19 11:27:49.458240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.458278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.458440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.458466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.095 [2024-11-19 11:27:49.458591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.095 [2024-11-19 11:27:49.458617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.095 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.458825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.458848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.458961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.458986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 
00:25:54.096 [2024-11-19 11:27:49.459103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.459128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.459372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.459398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.459512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.459537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.459632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.459657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.459824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.459873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 
00:25:54.096 [2024-11-19 11:27:49.460045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.460072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.460186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.460233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.460431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.460457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.460607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.460633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.460748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.460787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 
00:25:54.096 [2024-11-19 11:27:49.460921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.460945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.461139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.461163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.461298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.461323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.461480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.461520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.461676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.461700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 
00:25:54.096 [2024-11-19 11:27:49.461896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.461920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.462073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.462120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.462264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.462289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.462424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.462450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.462577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.462602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 
00:25:54.096 [2024-11-19 11:27:49.462773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.462797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.462942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.462971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.463128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.463166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.463299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.463324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.463532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.463558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 
00:25:54.096 [2024-11-19 11:27:49.463700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.463724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.463851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.463876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.464012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.464036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.464178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.464202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 00:25:54.096 [2024-11-19 11:27:49.464320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.096 [2024-11-19 11:27:49.464345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.096 qpair failed and we were unable to recover it. 
00:25:54.097 [2024-11-19 11:27:49.464487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.464513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 00:25:54.097 [2024-11-19 11:27:49.464672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.464696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 00:25:54.097 [2024-11-19 11:27:49.464822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.464846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 00:25:54.097 [2024-11-19 11:27:49.464964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.464989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 00:25:54.097 [2024-11-19 11:27:49.465134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.465158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 
00:25:54.097 [2024-11-19 11:27:49.465325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.465372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 00:25:54.097 [2024-11-19 11:27:49.465517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.465543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 00:25:54.097 [2024-11-19 11:27:49.465632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.465657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 00:25:54.097 [2024-11-19 11:27:49.465771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.465795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 00:25:54.097 [2024-11-19 11:27:49.465992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.466018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 
00:25:54.097 [2024-11-19 11:27:49.466127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.466167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 00:25:54.097 [2024-11-19 11:27:49.466306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.466356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 00:25:54.097 [2024-11-19 11:27:49.466462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.466488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 00:25:54.097 [2024-11-19 11:27:49.466638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.466663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 00:25:54.097 [2024-11-19 11:27:49.466839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.097 [2024-11-19 11:27:49.466863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.097 qpair failed and we were unable to recover it. 
00:25:54.100 [2024-11-19 11:27:49.486441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.486467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 00:25:54.100 [2024-11-19 11:27:49.486562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.486588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 00:25:54.100 [2024-11-19 11:27:49.486757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.486796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 00:25:54.100 [2024-11-19 11:27:49.486950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.486975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 00:25:54.100 [2024-11-19 11:27:49.487115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.487140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 
00:25:54.100 [2024-11-19 11:27:49.487307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.487345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 00:25:54.100 [2024-11-19 11:27:49.487536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.487561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 00:25:54.100 [2024-11-19 11:27:49.487687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.487712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 00:25:54.100 [2024-11-19 11:27:49.487850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.487889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 00:25:54.100 [2024-11-19 11:27:49.488011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.488035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 
00:25:54.100 [2024-11-19 11:27:49.488241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.488265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 00:25:54.100 [2024-11-19 11:27:49.488342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.488370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 00:25:54.100 [2024-11-19 11:27:49.488475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.488513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 00:25:54.100 [2024-11-19 11:27:49.488629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.488654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 00:25:54.100 [2024-11-19 11:27:49.488791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.100 [2024-11-19 11:27:49.488815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.100 qpair failed and we were unable to recover it. 
00:25:54.101 [2024-11-19 11:27:49.488952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.488977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.489117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.489143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.489306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.489344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.489477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.489503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.489622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.489647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 
00:25:54.101 [2024-11-19 11:27:49.489773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.489798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.489934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.489959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.490097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.490137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.490277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.490316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.490450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.490490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 
00:25:54.101 [2024-11-19 11:27:49.490574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.490600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.490718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.490743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.490887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.490912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.491054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.491105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.491225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.491267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 
00:25:54.101 [2024-11-19 11:27:49.491403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.491430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.491544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.491569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.491693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.491717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.491830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.491854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.491992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.492018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 
00:25:54.101 [2024-11-19 11:27:49.492168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.492207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.492313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.492337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.492476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.492503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.492619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.492645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.492810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.492834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 
00:25:54.101 [2024-11-19 11:27:49.493011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.493035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.493127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.493151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.493298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.493322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.493474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.493499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.493618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.493643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 
00:25:54.101 [2024-11-19 11:27:49.493770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.493809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.493957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.493981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.494157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.494196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.494310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.494334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.494518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.494544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 
00:25:54.101 [2024-11-19 11:27:49.494709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.494747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.494854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.494878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.495043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.495068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.495201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.101 [2024-11-19 11:27:49.495245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.101 qpair failed and we were unable to recover it. 00:25:54.101 [2024-11-19 11:27:49.495404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.495429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 
00:25:54.102 [2024-11-19 11:27:49.495539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.495564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.495720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.495749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.495877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.495902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.496028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.496054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.496159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.496184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 
00:25:54.102 [2024-11-19 11:27:49.496320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.496359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.496464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.496490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.496624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.496649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.496797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.496821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.496924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.496950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 
00:25:54.102 [2024-11-19 11:27:49.497073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.497098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.497255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.497280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.497393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.497429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.497553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.497579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.497686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.497710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 
00:25:54.102 [2024-11-19 11:27:49.497849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.497889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.498020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.498044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.498179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.498205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.498387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.498414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 00:25:54.102 [2024-11-19 11:27:49.498498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.102 [2024-11-19 11:27:49.498524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.102 qpair failed and we were unable to recover it. 
00:25:54.102 [2024-11-19 11:27:49.498657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.102 [2024-11-19 11:27:49.498682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.102 qpair failed and we were unable to recover it.
00:25:54.102 [... the identical connect() failed / sock connection error / qpair failed sequence repeats ~114 more times (timestamps 11:27:49.498799 through 11:27:49.516331), every occurrence with errno = 111 against tqpair=0x1045fa0, addr=10.0.0.2, port=4420 ...]
00:25:54.105 [2024-11-19 11:27:49.516434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.516460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.516606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.516632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.516745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.516770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.516915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.516940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.517043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.517070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 
00:25:54.105 [2024-11-19 11:27:49.517189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.517215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.517334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.517359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.517507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.517533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.517686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.517711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.517804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.517829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 
00:25:54.105 [2024-11-19 11:27:49.517922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.517948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.518076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.518101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.518238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.518263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.518399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.518425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.518570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.518596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 
00:25:54.105 [2024-11-19 11:27:49.518728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.518754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.518871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.518897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.519046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.519071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.519190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.519215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 00:25:54.105 [2024-11-19 11:27:49.519335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.105 [2024-11-19 11:27:49.519367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.105 qpair failed and we were unable to recover it. 
00:25:54.105 [2024-11-19 11:27:49.519461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.106 [2024-11-19 11:27:49.519487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.106 qpair failed and we were unable to recover it. 00:25:54.106 [2024-11-19 11:27:49.519577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.106 [2024-11-19 11:27:49.519603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.106 qpair failed and we were unable to recover it. 00:25:54.106 [2024-11-19 11:27:49.519757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.106 [2024-11-19 11:27:49.519783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.106 qpair failed and we were unable to recover it. 00:25:54.106 [2024-11-19 11:27:49.519883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.106 [2024-11-19 11:27:49.519922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.106 qpair failed and we were unable to recover it. 00:25:54.106 [2024-11-19 11:27:49.520064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.106 [2024-11-19 11:27:49.520088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.106 qpair failed and we were unable to recover it. 
00:25:54.106 [2024-11-19 11:27:49.520221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.106 [2024-11-19 11:27:49.520245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.106 qpair failed and we were unable to recover it. 00:25:54.106 [2024-11-19 11:27:49.520375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.106 [2024-11-19 11:27:49.520401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.106 qpair failed and we were unable to recover it. 00:25:54.106 [2024-11-19 11:27:49.520528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.106 [2024-11-19 11:27:49.520554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.106 qpair failed and we were unable to recover it. 00:25:54.106 [2024-11-19 11:27:49.520677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.106 [2024-11-19 11:27:49.520702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.106 qpair failed and we were unable to recover it. 00:25:54.106 [2024-11-19 11:27:49.520817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.106 [2024-11-19 11:27:49.520844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.106 qpair failed and we were unable to recover it. 
00:25:54.406 [2024-11-19 11:27:49.520971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-11-19 11:27:49.520997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-11-19 11:27:49.521117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-11-19 11:27:49.521143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.406 [2024-11-19 11:27:49.521266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.406 [2024-11-19 11:27:49.521291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.406 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.521413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.521440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.521586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.521611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 
00:25:54.407 [2024-11-19 11:27:49.521730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.521755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.521855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.521881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.522001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.522026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.522147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.522173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.522291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.522316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 
00:25:54.407 [2024-11-19 11:27:49.522462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.522489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.522567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.522593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.522690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.522715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.522807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.522833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.522956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.522981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 
00:25:54.407 [2024-11-19 11:27:49.523102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.523127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.523225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.523250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.523378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.523404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.523501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.523527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.523615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.523640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 
00:25:54.407 [2024-11-19 11:27:49.523777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.523804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.524022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.524057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.524163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.524189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.524323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.524348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.524463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.524489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 
00:25:54.407 [2024-11-19 11:27:49.524617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.524642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.524752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.524781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.524879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.524904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.525054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.525079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.525192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.525217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 
00:25:54.407 [2024-11-19 11:27:49.525371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.525398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.525519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.525545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.525689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.525714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.525852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.525877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.526077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.526103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 
00:25:54.407 [2024-11-19 11:27:49.526260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.526285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.526408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.407 [2024-11-19 11:27:49.526434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.407 qpair failed and we were unable to recover it. 00:25:54.407 [2024-11-19 11:27:49.526547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-11-19 11:27:49.526572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 00:25:54.408 [2024-11-19 11:27:49.526738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-11-19 11:27:49.526764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 00:25:54.408 [2024-11-19 11:27:49.526873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-11-19 11:27:49.526898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 
00:25:54.408 [2024-11-19 11:27:49.527079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-11-19 11:27:49.527120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 00:25:54.408 [2024-11-19 11:27:49.527220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-11-19 11:27:49.527245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 00:25:54.408 [2024-11-19 11:27:49.527373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-11-19 11:27:49.527400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 00:25:54.408 [2024-11-19 11:27:49.527510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-11-19 11:27:49.527535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 00:25:54.408 [2024-11-19 11:27:49.527635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.408 [2024-11-19 11:27:49.527660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.408 qpair failed and we were unable to recover it. 
00:25:54.408 [2024-11-19 11:27:49.527783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.408 [2024-11-19 11:27:49.527808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.408 qpair failed and we were unable to recover it.
00:25:54.408 [2024-11-19 11:27:49.527908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.408 [2024-11-19 11:27:49.527932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.408 qpair failed and we were unable to recover it.
00:25:54.408 [... previous three messages repeated for each subsequent reconnect attempt from 2024-11-19 11:27:49.528081 through 11:27:49.548399: connect() failed with errno = 111 (ECONNREFUSED), tqpair=0x1045fa0, addr=10.0.0.2, port=4420, qpair unrecoverable ...]
00:25:54.411 [2024-11-19 11:27:49.548619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.548657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 00:25:54.411 [2024-11-19 11:27:49.548794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.548817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 00:25:54.411 [2024-11-19 11:27:49.548946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.548971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 00:25:54.411 [2024-11-19 11:27:49.549100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.549124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 00:25:54.411 [2024-11-19 11:27:49.549294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.549317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 
00:25:54.411 [2024-11-19 11:27:49.549457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.549483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 00:25:54.411 [2024-11-19 11:27:49.549658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.549683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 00:25:54.411 [2024-11-19 11:27:49.549867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.549890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 00:25:54.411 [2024-11-19 11:27:49.550110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.550134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 00:25:54.411 [2024-11-19 11:27:49.550420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.550444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 
00:25:54.411 [2024-11-19 11:27:49.550562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.550586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 00:25:54.411 [2024-11-19 11:27:49.550808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.550831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 00:25:54.411 [2024-11-19 11:27:49.550941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.550963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 00:25:54.411 [2024-11-19 11:27:49.551136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.411 [2024-11-19 11:27:49.551160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.411 qpair failed and we were unable to recover it. 00:25:54.411 [2024-11-19 11:27:49.551394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.551419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 
00:25:54.412 [2024-11-19 11:27:49.551533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.551562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.551761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.551785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.551974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.551998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.552163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.552186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.552405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.552430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 
00:25:54.412 [2024-11-19 11:27:49.552581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.552616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.552795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.552817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.552996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.553019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.553257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.553281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.553457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.553482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 
00:25:54.412 [2024-11-19 11:27:49.553641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.553665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.553839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.553863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.553971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.553994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.554218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.554241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.554425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.554450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 
00:25:54.412 [2024-11-19 11:27:49.554586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.554610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.554749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.554773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.555002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.555026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.555243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.555266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.555445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.555469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 
00:25:54.412 [2024-11-19 11:27:49.555625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.555664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.555864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.555886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.556076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.556100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.556324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.556348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.556494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.556518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 
00:25:54.412 [2024-11-19 11:27:49.556678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.556701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.556940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.556964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.557133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.557156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.557370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.557395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.557562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.557588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 
00:25:54.412 [2024-11-19 11:27:49.557765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.557803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.557943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.557966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.558128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.558166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.558289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.558326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 00:25:54.412 [2024-11-19 11:27:49.558552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.412 [2024-11-19 11:27:49.558577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.412 qpair failed and we were unable to recover it. 
00:25:54.412 [2024-11-19 11:27:49.558684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.558707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.558831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.558855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.558978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.559002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.559111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.559136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.559276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.559300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 
00:25:54.413 [2024-11-19 11:27:49.559474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.559498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.559670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.559694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.559815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.559854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.560002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.560027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.560232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.560256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 
00:25:54.413 [2024-11-19 11:27:49.560440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.560465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.560583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.560606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.560743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.560783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.560934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.560973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.561112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.561139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 
00:25:54.413 [2024-11-19 11:27:49.561269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.561294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.561496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.561521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.561621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.561646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.561844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.561868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.562038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.562060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 
00:25:54.413 [2024-11-19 11:27:49.562248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.562272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.562409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.562434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.562575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.562599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.562784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.562807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 00:25:54.413 [2024-11-19 11:27:49.562984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.413 [2024-11-19 11:27:49.563008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.413 qpair failed and we were unable to recover it. 
00:25:54.413 [2024-11-19 11:27:49.563216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.413 [2024-11-19 11:27:49.563238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.413 qpair failed and we were unable to recover it.
00:25:54.413 [2024-11-19 11:27:49.563418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.413 [2024-11-19 11:27:49.563444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.413 qpair failed and we were unable to recover it.
00:25:54.413 [2024-11-19 11:27:49.563533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.413 [2024-11-19 11:27:49.563558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.413 qpair failed and we were unable to recover it.
00:25:54.413 [2024-11-19 11:27:49.563686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.413 [2024-11-19 11:27:49.563724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.413 qpair failed and we were unable to recover it.
00:25:54.413 [2024-11-19 11:27:49.563935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.413 [2024-11-19 11:27:49.563958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.413 qpair failed and we were unable to recover it.
00:25:54.413 [2024-11-19 11:27:49.564166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.413 [2024-11-19 11:27:49.564190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.413 qpair failed and we were unable to recover it.
00:25:54.413 [2024-11-19 11:27:49.564394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.413 [2024-11-19 11:27:49.564418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.413 qpair failed and we were unable to recover it.
00:25:54.413 [2024-11-19 11:27:49.564521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.413 [2024-11-19 11:27:49.564545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.413 qpair failed and we were unable to recover it.
00:25:54.413 [2024-11-19 11:27:49.564712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.413 [2024-11-19 11:27:49.564763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.413 qpair failed and we were unable to recover it.
00:25:54.413 [2024-11-19 11:27:49.564863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.413 [2024-11-19 11:27:49.564901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.413 qpair failed and we were unable to recover it.
00:25:54.413 [2024-11-19 11:27:49.565101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.413 [2024-11-19 11:27:49.565125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.413 qpair failed and we were unable to recover it.
00:25:54.413 [2024-11-19 11:27:49.565282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.565306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.565441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.565467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.565621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.565647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.565795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.565819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.565964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.565988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.566167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.566192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.566376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.566401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.566530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.566570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.566683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.566707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.566923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.566947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.567080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.567104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.567344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.567384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.567491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.567516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.567638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.567662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.567862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.567887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.568031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.568055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.568210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.568249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.568388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.568414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.568544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.568569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.568789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.568813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.569006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.569031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.569208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.569233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.569360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.569392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.569516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.569541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.569689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.569717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.569894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.569918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.570054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.570079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.570215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.570240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.570401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.570427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.570528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.570553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.570679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.570704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.570887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.570911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.571043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.571068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.571229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.571268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.571387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.571411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.571523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.414 [2024-11-19 11:27:49.571546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.414 qpair failed and we were unable to recover it.
00:25:54.414 [2024-11-19 11:27:49.571730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.571754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.571899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.571924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.572029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.572054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.572188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.572214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.572349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.572381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.572487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.572523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.572649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.572675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.572868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.572892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.573048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.573072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.573243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.573267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.573388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.573428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.573555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.573581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.573698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.573724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.573913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.573937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.574122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.574147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.574339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.574387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.574524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.574549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.574711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.574735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.574877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.574902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.575043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.575067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.575238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.575263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.575467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.575493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.575583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.575624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.575742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.575767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.575924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.575965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.576084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.576123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.576329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.576375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.576519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.576544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.576676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.576702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.576876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.576901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.577061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.577086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.577256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.577280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.577484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.577510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.577634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.577659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.577796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.415 [2024-11-19 11:27:49.577821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.415 qpair failed and we were unable to recover it.
00:25:54.415 [2024-11-19 11:27:49.577970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.577995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.578154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.578180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.578274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.578299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.578439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.578465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.578550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.578575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.578729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.578754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.578901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.578926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.579087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.579127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.579291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.579315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.579435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.579461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.579589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.579614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.579751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.579789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.579906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.579944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.580147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.580186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.580341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.580385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.580498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.580523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.580667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.580691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.580851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.580888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.581026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.581050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.581213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.581252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.581376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.581401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.581521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.581551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.581667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.581693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.581825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.581849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.581959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.581984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.582142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.582167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.582300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.582338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.582514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.582539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.582733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.582757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.582910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.582934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.583136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.416 [2024-11-19 11:27:49.583159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.416 qpair failed and we were unable to recover it.
00:25:54.416 [2024-11-19 11:27:49.583294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.417 [2024-11-19 11:27:49.583318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.417 qpair failed and we were unable to recover it.
00:25:54.417 [2024-11-19 11:27:49.583451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.417 [2024-11-19 11:27:49.583476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.417 qpair failed and we were unable to recover it.
00:25:54.417 [2024-11-19 11:27:49.583601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.583625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.583806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.583830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.583985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.584009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.584132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.584156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.584329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.584374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 
00:25:54.417 [2024-11-19 11:27:49.584561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.584585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.584669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.584692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.584909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.584932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.585081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.585105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.585316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.585339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 
00:25:54.417 [2024-11-19 11:27:49.585549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.585575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.585732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.585756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.585926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.585949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.586165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.586188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.586325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.586348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 
00:25:54.417 [2024-11-19 11:27:49.586566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.586594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.586749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.586772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.586977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.587000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.587159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.587183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.587352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.587384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 
00:25:54.417 [2024-11-19 11:27:49.587578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.587602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.587797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.587821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.587995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.588019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.588200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.588223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.588374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.588399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 
00:25:54.417 [2024-11-19 11:27:49.588561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.588586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.588766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.588789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.588918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.588942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.589150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.589174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.589332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.589378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 
00:25:54.417 [2024-11-19 11:27:49.589520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.589544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.589753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.589776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.589960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.589982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.590204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.417 [2024-11-19 11:27:49.590227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.417 qpair failed and we were unable to recover it. 00:25:54.417 [2024-11-19 11:27:49.590439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.590465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 
00:25:54.418 [2024-11-19 11:27:49.590666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.590704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.590898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.590922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.591114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.591138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.591319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.591342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.591567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.591592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 
00:25:54.418 [2024-11-19 11:27:49.591779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.591803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.591995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.592018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.592185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.592213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.592410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.592450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.592613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.592636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 
00:25:54.418 [2024-11-19 11:27:49.592812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.592836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.593038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.593061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.593220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.593242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.593439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.593464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.593686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.593725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 
00:25:54.418 [2024-11-19 11:27:49.593922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.593945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.594167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.594190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.594389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.594414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.594575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.594599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.594712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.594736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 
00:25:54.418 [2024-11-19 11:27:49.594963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.594987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.595152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.595175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.595378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.595402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.595660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.595684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.595867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.595890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 
00:25:54.418 [2024-11-19 11:27:49.596109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.596132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.596333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.596356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.596591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.596614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.596782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.596805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.597003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.597027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 
00:25:54.418 [2024-11-19 11:27:49.597192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.597214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.597392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.597417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.597579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.597604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.597822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.597844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 00:25:54.418 [2024-11-19 11:27:49.598072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.418 [2024-11-19 11:27:49.598095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.418 qpair failed and we were unable to recover it. 
00:25:54.418 [2024-11-19 11:27:49.598267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.419 [2024-11-19 11:27:49.598291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.419 qpair failed and we were unable to recover it. 00:25:54.419 [2024-11-19 11:27:49.598489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.419 [2024-11-19 11:27:49.598522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.419 qpair failed and we were unable to recover it. 00:25:54.419 [2024-11-19 11:27:49.598747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.419 [2024-11-19 11:27:49.598771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.419 qpair failed and we were unable to recover it. 00:25:54.419 [2024-11-19 11:27:49.598998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.419 [2024-11-19 11:27:49.599022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.419 qpair failed and we were unable to recover it. 00:25:54.419 [2024-11-19 11:27:49.599255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.419 [2024-11-19 11:27:49.599280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.419 qpair failed and we were unable to recover it. 
00:25:54.419 [2024-11-19 11:27:49.599520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.599545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.599781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.599805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.599973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.599996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.600189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.600213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.600441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.600466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.600705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.600728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.600924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.600948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.601142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.601166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.601331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.601375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.601561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.601586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.601793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.601817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.602018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.602041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.602232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.602256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.602431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.602476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.602685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.602708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.602935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.602959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.603151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.603174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.603347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.603395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.603599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.603625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.603832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.603872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.604044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.604084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.604251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.604277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.604504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.604529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.604729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.604769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.604977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.605002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.605184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.605209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.605416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.605442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.605693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.605726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.605965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.605989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.606163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.606186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.606418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.419 [2024-11-19 11:27:49.606443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.419 qpair failed and we were unable to recover it.
00:25:54.419 [2024-11-19 11:27:49.606651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.606674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.606769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.606807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.606943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.606967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.607127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.607165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.607391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.607420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.607627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.607651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.607878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.607902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.608113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.608136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.608372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.608396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.608624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.608664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.608876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.608914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.609095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.609119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.609319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.609358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.609602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.609628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.609789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.609813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.609984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.610008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.610182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.610205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.610389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.610414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.610652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.610675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.610905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.610928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.611099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.611123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.611355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.611399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.611575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.611600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.611822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.611846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.612066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.612089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.612213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.612236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.612423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.612449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.612686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.612709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.612905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.612928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.613123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.613147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.613320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.613358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.613591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.613619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.613857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.613881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.614106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.614130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.614306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.614329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.614580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.420 [2024-11-19 11:27:49.614605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.420 qpair failed and we were unable to recover it.
00:25:54.420 [2024-11-19 11:27:49.614827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.614851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.615030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.615053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.615214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.615237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.615428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.615454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.615616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.615640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.615824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.615848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.616048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.616072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.616249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.616272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.616489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.616514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.616752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.616776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.616982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.617005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.617233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.617257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.617482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.617508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.617690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.617713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.617932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.617955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.618200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.618224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.618402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.618426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.618580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.618605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.618771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.618795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.618923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.618961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.619098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.619122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.619259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.619284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.619424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.619450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.619597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.619622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.619797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.619826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.620050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.620073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.620290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.620314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.620469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.620495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.620642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.620667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.620925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.620949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.621175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.621199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.621434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.421 [2024-11-19 11:27:49.621460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.421 qpair failed and we were unable to recover it.
00:25:54.421 [2024-11-19 11:27:49.621609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.621634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.621870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.621894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.622106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.622128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.622278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.622301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.622465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.622492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.622660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.622683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.622887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.622910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.623052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.623091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.623215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.623238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.623400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.623425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.623567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.623592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.623757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.623781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.623962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.623985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.624172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.624196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.624352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.624400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.624554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.422 [2024-11-19 11:27:49.624579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.422 qpair failed and we were unable to recover it.
00:25:54.422 [2024-11-19 11:27:49.624800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.624823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.624995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.625018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.625248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.625272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.625481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.625507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.625643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.625668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 
00:25:54.422 [2024-11-19 11:27:49.625850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.625874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.626034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.626073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.626283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.626306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.626490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.626516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.626712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.626737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 
00:25:54.422 [2024-11-19 11:27:49.626956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.626979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.627130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.627153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.627412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.627452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.627590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.627615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.627711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.627736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 
00:25:54.422 [2024-11-19 11:27:49.627914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.627957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.628171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.628194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.628438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.628463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.628642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.628681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.422 [2024-11-19 11:27:49.628896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.628920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 
00:25:54.422 [2024-11-19 11:27:49.629106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.422 [2024-11-19 11:27:49.629130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.422 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.629378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.629424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.629569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.629594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.629772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.629796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.630009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.630032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 
00:25:54.423 [2024-11-19 11:27:49.630248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.630271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.630461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.630486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.630632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.630671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.630856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.630879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.631071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.631095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 
00:25:54.423 [2024-11-19 11:27:49.631324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.631369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.631521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.631545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.631771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.631794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.631980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.632004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.632193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.632216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 
00:25:54.423 [2024-11-19 11:27:49.632445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.632471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.632626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.632651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.632891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.632915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.633025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.633048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.633217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.633241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 
00:25:54.423 [2024-11-19 11:27:49.633471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.633496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.633670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.633694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.633924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.633952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.634130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.634153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.634386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.634411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 
00:25:54.423 [2024-11-19 11:27:49.634603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.634628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.634778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.634810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.634970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.634993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.635168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.635192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.635358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.635386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 
00:25:54.423 [2024-11-19 11:27:49.635563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.635587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.635821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.635845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.636042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.636064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.636238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.636262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.636487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.636512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 
00:25:54.423 [2024-11-19 11:27:49.636736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.636759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.423 qpair failed and we were unable to recover it. 00:25:54.423 [2024-11-19 11:27:49.637001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.423 [2024-11-19 11:27:49.637024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.637255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.637279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.637451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.637476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.637665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.637688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 
00:25:54.424 [2024-11-19 11:27:49.637810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.637833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.637986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.638009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.638208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.638232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.638437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.638462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.638613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.638652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 
00:25:54.424 [2024-11-19 11:27:49.638818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.638842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.639070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.639094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.639220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.639243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.639384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.639409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.639510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.639546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 
00:25:54.424 [2024-11-19 11:27:49.639774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.639797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.639947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.639970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.640082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.640106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.640287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.640326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 00:25:54.424 [2024-11-19 11:27:49.640514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.424 [2024-11-19 11:27:49.640539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.424 qpair failed and we were unable to recover it. 
00:25:54.424 [2024-11-19 11:27:49.640760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.424 [2024-11-19 11:27:49.640784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.424 qpair failed and we were unable to recover it.
00:25:54.427 [2024-11-19 11:27:49.663765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.427 [2024-11-19 11:27:49.663802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.427 qpair failed and we were unable to recover it. 00:25:54.427 [2024-11-19 11:27:49.663963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.427 [2024-11-19 11:27:49.663986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.427 qpair failed and we were unable to recover it. 00:25:54.427 [2024-11-19 11:27:49.664134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.427 [2024-11-19 11:27:49.664173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.427 qpair failed and we were unable to recover it. 00:25:54.427 [2024-11-19 11:27:49.664272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.427 [2024-11-19 11:27:49.664295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.427 qpair failed and we were unable to recover it. 00:25:54.427 [2024-11-19 11:27:49.664421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.427 [2024-11-19 11:27:49.664445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.427 qpair failed and we were unable to recover it. 
00:25:54.427 [2024-11-19 11:27:49.664554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.427 [2024-11-19 11:27:49.664579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.427 qpair failed and we were unable to recover it. 00:25:54.427 [2024-11-19 11:27:49.664733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.427 [2024-11-19 11:27:49.664771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.664954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.664977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.665192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.665216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.665372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.665411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 
00:25:54.428 [2024-11-19 11:27:49.665506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.665530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.665672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.665696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.665814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.665838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.665969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.665992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.666164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.666203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 
00:25:54.428 [2024-11-19 11:27:49.666304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.666329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.666482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.666507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.666660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.666698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.666833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.666856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.667008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.667031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 
00:25:54.428 [2024-11-19 11:27:49.667208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.667231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.667319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.667343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.667520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.667544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.667707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.667732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.667906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.667929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 
00:25:54.428 [2024-11-19 11:27:49.668067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.668091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.668222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.668246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.668419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.668459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.668549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.668572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.668706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.668731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 
00:25:54.428 [2024-11-19 11:27:49.668872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.668896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.669074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.669098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.669211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.669236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.669387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.669412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.669545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.669569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 
00:25:54.428 [2024-11-19 11:27:49.669748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.669772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.669938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.669961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.670189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.670212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.670413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.670438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.670545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.670583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 
00:25:54.428 [2024-11-19 11:27:49.670706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.670729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.670869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.670908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.428 [2024-11-19 11:27:49.671032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.428 [2024-11-19 11:27:49.671075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.428 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.671223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.671247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.671387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.671412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 
00:25:54.429 [2024-11-19 11:27:49.671542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.671566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.671674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.671698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.671874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.671897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.672059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.672083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.672215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.672239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 
00:25:54.429 [2024-11-19 11:27:49.672374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.672399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.672516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.672540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.672671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.672695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.672855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.672879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.673000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.673024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 
00:25:54.429 [2024-11-19 11:27:49.673137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.673161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.673308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.673332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.673452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.673477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.673568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.673592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.673715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.673741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 
00:25:54.429 [2024-11-19 11:27:49.673903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.673927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.674063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.674088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.674253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.674291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.674398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.674438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.674535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.674559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 
00:25:54.429 [2024-11-19 11:27:49.674661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.674686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.674848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.674871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.675014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.675037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.675166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.675191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.675342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.675374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 
00:25:54.429 [2024-11-19 11:27:49.675465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.675490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.675607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.675632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.675777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.675801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.675929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.675953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 00:25:54.429 [2024-11-19 11:27:49.676130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.429 [2024-11-19 11:27:49.676164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.429 qpair failed and we were unable to recover it. 
00:25:54.429 [2024-11-19 11:27:49.676339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.429 [2024-11-19 11:27:49.676367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.429 qpair failed and we were unable to recover it.
00:25:54.429 [2024-11-19 11:27:49.676487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.429 [2024-11-19 11:27:49.676511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.429 qpair failed and we were unable to recover it.
[... same three-line sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error from nvme_tcp_qpair_connect_sock, "qpair failed and we were unable to recover it." — repeats continuously from 11:27:49.676650 through 11:27:49.695660, alternating between tqpair=0x1045fa0 and tqpair=0x7fb720000b90, all against addr=10.0.0.2, port=4420 ...]
00:25:54.433 [2024-11-19 11:27:49.695785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.695812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.695978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.696020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.696119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.696144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.696304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.696329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.696463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.696488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 
00:25:54.433 [2024-11-19 11:27:49.696597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.696622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.696741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.696766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.696916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.696940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.697067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.697091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.697216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.697240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 
00:25:54.433 [2024-11-19 11:27:49.697392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.697417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.697517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.697541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.697626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.697651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.697804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.697831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.697938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.697968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 
00:25:54.433 [2024-11-19 11:27:49.698103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.698129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.698244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.698269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.698413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.698439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.698549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.698574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.698690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.698715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 
00:25:54.433 [2024-11-19 11:27:49.698885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.698909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.699042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.699067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.699225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.699251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.699356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.699390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.699493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.699518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 
00:25:54.433 [2024-11-19 11:27:49.699617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.699641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.699766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.699789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.433 qpair failed and we were unable to recover it. 00:25:54.433 [2024-11-19 11:27:49.699928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.433 [2024-11-19 11:27:49.699952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.700084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.700110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.700245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.700269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 
00:25:54.434 [2024-11-19 11:27:49.700424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.700450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.700573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.700597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.700726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.700750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.700887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.700911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.701030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.701056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 
00:25:54.434 [2024-11-19 11:27:49.701193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.701216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.701360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.701389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.701500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.701525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.701630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.701668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.701804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.701828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 
00:25:54.434 [2024-11-19 11:27:49.701955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.701980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.702125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.702154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.702286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.702311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.702409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.702433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.702564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.702589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 
00:25:54.434 [2024-11-19 11:27:49.702720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.702744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.702917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.702941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.703037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.703061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.703174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.703199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.703314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.703339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 
00:25:54.434 [2024-11-19 11:27:49.703447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.703472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.703565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.703589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.703715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.703740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.703906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.703930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.704041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.704073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 
00:25:54.434 [2024-11-19 11:27:49.704231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.704257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.704400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.704426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.704553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.704577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.704710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.704749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.704872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.704895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 
00:25:54.434 [2024-11-19 11:27:49.705060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.705084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.705246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.705271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.705430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.705455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.434 qpair failed and we were unable to recover it. 00:25:54.434 [2024-11-19 11:27:49.705588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.434 [2024-11-19 11:27:49.705612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.705722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.705747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 
00:25:54.435 [2024-11-19 11:27:49.705862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.705886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.706021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.706045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.706134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.706158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.706299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.706329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.706476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.706502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 
00:25:54.435 [2024-11-19 11:27:49.706612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.706636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.706814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.706837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.706988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.707012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.707150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.707189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.707367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.707392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 
00:25:54.435 [2024-11-19 11:27:49.707493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.707518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.707613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.707638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.707765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.707788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.707901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.707925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.708029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.708053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 
00:25:54.435 [2024-11-19 11:27:49.708152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.708176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.708313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.708338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.708498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.708536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.708684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.708726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 00:25:54.435 [2024-11-19 11:27:49.708868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.435 [2024-11-19 11:27:49.708893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.435 qpair failed and we were unable to recover it. 
00:25:54.438 [2024-11-19 11:27:49.725521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.438 [2024-11-19 11:27:49.725546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.438 qpair failed and we were unable to recover it. 00:25:54.438 [2024-11-19 11:27:49.725658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.438 [2024-11-19 11:27:49.725683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.438 qpair failed and we were unable to recover it. 00:25:54.438 [2024-11-19 11:27:49.725832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.438 [2024-11-19 11:27:49.725857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.438 qpair failed and we were unable to recover it. 00:25:54.438 [2024-11-19 11:27:49.726002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.438 [2024-11-19 11:27:49.726043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.438 qpair failed and we were unable to recover it. 00:25:54.438 [2024-11-19 11:27:49.726139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.438 [2024-11-19 11:27:49.726163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.438 qpair failed and we were unable to recover it. 
00:25:54.438 [2024-11-19 11:27:49.726291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.438 [2024-11-19 11:27:49.726315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.438 qpair failed and we were unable to recover it. 00:25:54.438 [2024-11-19 11:27:49.726445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.438 [2024-11-19 11:27:49.726470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.438 qpair failed and we were unable to recover it. 00:25:54.438 [2024-11-19 11:27:49.726566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.438 [2024-11-19 11:27:49.726592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.438 qpair failed and we were unable to recover it. 00:25:54.438 [2024-11-19 11:27:49.726718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.438 [2024-11-19 11:27:49.726742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.438 qpair failed and we were unable to recover it. 00:25:54.438 [2024-11-19 11:27:49.726873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.438 [2024-11-19 11:27:49.726912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.438 qpair failed and we were unable to recover it. 
00:25:54.438 [2024-11-19 11:27:49.727029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.438 [2024-11-19 11:27:49.727053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.438 qpair failed and we were unable to recover it. 00:25:54.438 [2024-11-19 11:27:49.727200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.438 [2024-11-19 11:27:49.727224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.438 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.727381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.727405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.727513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.727538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.727707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.727731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 
00:25:54.439 [2024-11-19 11:27:49.727908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.727946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.728085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.728108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.728205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.728229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.728393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.728420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.728537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.728561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 
00:25:54.439 [2024-11-19 11:27:49.728740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.728763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.728936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.728959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.729087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.729126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.729291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.729314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.729468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.729493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 
00:25:54.439 [2024-11-19 11:27:49.729586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.729615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.729787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.729810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.729916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.729940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.730079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.730103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.730277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.730301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 
00:25:54.439 [2024-11-19 11:27:49.730419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.730445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.730545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.730568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.730734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.730758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.730867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.730892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.731032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.731055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 
00:25:54.439 [2024-11-19 11:27:49.731196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.731220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.731368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.731392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.731505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.731529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.731715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.731740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.731879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.731902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 
00:25:54.439 [2024-11-19 11:27:49.731990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.732015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.732163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.732187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.732331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.732374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.732454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.732477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.732623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.732648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 
00:25:54.439 [2024-11-19 11:27:49.732792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.732818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.732918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.439 [2024-11-19 11:27:49.732942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.439 qpair failed and we were unable to recover it. 00:25:54.439 [2024-11-19 11:27:49.733112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.733136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.733273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.733297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.733449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.733474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 
00:25:54.440 [2024-11-19 11:27:49.733587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.733611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.733793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.733832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.733988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.734027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.734159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.734183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.734326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.734350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 
00:25:54.440 [2024-11-19 11:27:49.734484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.734509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.734618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.734658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.734760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.734785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.734931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.734955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.735132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.735155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 
00:25:54.440 [2024-11-19 11:27:49.735313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.735337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.735465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.735490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.735596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.735620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.735735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.735760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.735862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.735885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 
00:25:54.440 [2024-11-19 11:27:49.736053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.736081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.736240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.736265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.736412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.736436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.736582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.736607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.736707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.736732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 
00:25:54.440 [2024-11-19 11:27:49.736831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.736854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.737043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.737068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.737213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.737236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.737407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.737432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 00:25:54.440 [2024-11-19 11:27:49.737548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.737573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 
00:25:54.440 [2024-11-19 11:27:49.737699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.440 [2024-11-19 11:27:49.737724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.440 qpair failed and we were unable to recover it. 
[identical "connect() failed, errno = 111" / "sock connection error" entry pairs for tqpair=0x7fb720000b90 (addr=10.0.0.2, port=4420) repeat continuously from 11:27:49.737897 through 11:27:49.755168; repeated entries omitted] 
00:25:54.444 [2024-11-19 11:27:49.755313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.755354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 
[the same failure pair continues for tqpair=0x1045fa0 (addr=10.0.0.2, port=4420) through 11:27:49.756324; repeated entries omitted] 
00:25:54.444 [2024-11-19 11:27:49.756438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.756463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.756587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.756612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.756736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.756760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.756874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.756898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.757064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.757089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 
00:25:54.444 [2024-11-19 11:27:49.757195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.757222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.757405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.757431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.757528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.757552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.757704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.757729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.757875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.757899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 
00:25:54.444 [2024-11-19 11:27:49.758016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.758040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.758168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.758193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.758299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.758322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.758452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.758478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.758572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.758596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 
00:25:54.444 [2024-11-19 11:27:49.758718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.444 [2024-11-19 11:27:49.758742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.444 qpair failed and we were unable to recover it. 00:25:54.444 [2024-11-19 11:27:49.758896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.758919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.759039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.759076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.759231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.759255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.759391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.759416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 
00:25:54.445 [2024-11-19 11:27:49.759510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.759534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.759616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.759654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.759788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.759815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.759937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.759962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.760091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.760114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 
00:25:54.445 [2024-11-19 11:27:49.760256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.760280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.760408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.760446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.760545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.760571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.760728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.760753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.760907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.760931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 
00:25:54.445 [2024-11-19 11:27:49.761010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.761033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.761169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.761194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.761298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.761323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.761457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.761482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.761581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.761606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 
00:25:54.445 [2024-11-19 11:27:49.761757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.761781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.761930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.761953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.762161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.762186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.762317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.762342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.762471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.762495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 
00:25:54.445 [2024-11-19 11:27:49.762593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.762617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.762736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.762760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.762938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.762962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.763098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.763123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.763242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.763280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 
00:25:54.445 [2024-11-19 11:27:49.763403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.763431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.763529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.763553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.763714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.763753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.763881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.445 [2024-11-19 11:27:49.763905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.445 qpair failed and we were unable to recover it. 00:25:54.445 [2024-11-19 11:27:49.764042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.764072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 
00:25:54.446 [2024-11-19 11:27:49.764230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.764269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.764406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.764431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.764568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.764592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.764754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.764778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.764917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.764941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 
00:25:54.446 [2024-11-19 11:27:49.765095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.765120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.765271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.765309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.765417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.765442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.765531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.765556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.765649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.765674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 
00:25:54.446 [2024-11-19 11:27:49.765813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.765837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.765979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.766003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.766181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.766220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.766409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.766435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.766579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.766604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 
00:25:54.446 [2024-11-19 11:27:49.766739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.766779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.766933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.766957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.767125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.767150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.767279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.767319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.767461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.767486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 
00:25:54.446 [2024-11-19 11:27:49.767612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.767651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.767829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.767853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.767940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.767964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.768136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.768160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.768308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.768332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 
00:25:54.446 [2024-11-19 11:27:49.768475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.768501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.768636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.768675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.446 qpair failed and we were unable to recover it. 00:25:54.446 [2024-11-19 11:27:49.768818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.446 [2024-11-19 11:27:49.768843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.768997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.769035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.769163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.769187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 
00:25:54.447 [2024-11-19 11:27:49.769352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.769385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.769478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.769503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.769672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.769695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.769838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.769862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.769985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.770009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 
00:25:54.447 [2024-11-19 11:27:49.770126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.770150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.770347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.770379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.770460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.770486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.770596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.770620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.770800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.770828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 
00:25:54.447 [2024-11-19 11:27:49.770986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.771024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.771118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.771142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.771284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.771308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.771412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.771438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.771531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.771556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 
00:25:54.447 [2024-11-19 11:27:49.771685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.771710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.771882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.771907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.772081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.772105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.772277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.772301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.772424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.772450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 
00:25:54.447 [2024-11-19 11:27:49.772540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.772564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.772712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.772737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.772925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.772948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.773098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.773122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.773261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.773300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 
00:25:54.447 [2024-11-19 11:27:49.773430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.773456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.773561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.773585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.773711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.773736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.773868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.773892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.774057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.774082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 
00:25:54.447 [2024-11-19 11:27:49.774222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.774273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.774387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.774414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.447 [2024-11-19 11:27:49.774512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.447 [2024-11-19 11:27:49.774537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.447 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.774671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.774695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.774858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.774882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 
00:25:54.448 [2024-11-19 11:27:49.775012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.775037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.775191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.775231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.775371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.775397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.775495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.775520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.775617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.775642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 
00:25:54.448 [2024-11-19 11:27:49.775768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.775791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.775930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.775955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.776097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.776122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.776263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.776288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.776408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.776449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 
00:25:54.448 [2024-11-19 11:27:49.776578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.776612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.776754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.776787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.776940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.776974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.777144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.777178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.777392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.777432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 
00:25:54.448 [2024-11-19 11:27:49.777576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.777610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.777761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.777795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.777937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.777970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.778144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.778178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.778343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.778384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 
00:25:54.448 [2024-11-19 11:27:49.778512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.778545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.778685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.778718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.778825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.778859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.779000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.779033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.779220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.779258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 
00:25:54.448 [2024-11-19 11:27:49.779419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.779454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.779579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.779613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.779815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.779853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.780003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.780041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.780198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.780236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 
00:25:54.448 [2024-11-19 11:27:49.780421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.780456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.780591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.780624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.780790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.780828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.780967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.781006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.448 [2024-11-19 11:27:49.781125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.781163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 
00:25:54.448 [2024-11-19 11:27:49.781320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.448 [2024-11-19 11:27:49.781358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.448 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.781515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.781549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.781709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.781747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.781859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.781897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.782057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.782096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 
00:25:54.449 [2024-11-19 11:27:49.782267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.782306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.782456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.782490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.782634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.782688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.782828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.782867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.783017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.783055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 
00:25:54.449 [2024-11-19 11:27:49.783226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.783264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.783418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.783453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.783590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.783623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.783785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.783824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.783946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.783986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 
00:25:54.449 [2024-11-19 11:27:49.784180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.784218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.784420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.784454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.784580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.784613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.784785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.784824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.785002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.785047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 
00:25:54.449 [2024-11-19 11:27:49.785197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.785235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.785421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.785455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.785563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.785596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.785776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.785814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 00:25:54.449 [2024-11-19 11:27:49.785995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.449 [2024-11-19 11:27:49.786033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:54.449 qpair failed and we were unable to recover it. 
00:25:54.451 [2024-11-19 11:27:49.798762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.451 [2024-11-19 11:27:49.798822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.451 qpair failed and we were unable to recover it.
00:25:54.452 [2024-11-19 11:27:49.809521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.809563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.452 qpair failed and we were unable to recover it. 00:25:54.452 [2024-11-19 11:27:49.809745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.809789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.452 qpair failed and we were unable to recover it. 00:25:54.452 [2024-11-19 11:27:49.809999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.810044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.452 qpair failed and we were unable to recover it. 00:25:54.452 [2024-11-19 11:27:49.810243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.810287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.452 qpair failed and we were unable to recover it. 00:25:54.452 [2024-11-19 11:27:49.810473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.810515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.452 qpair failed and we were unable to recover it. 
00:25:54.452 [2024-11-19 11:27:49.810717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.810778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.452 qpair failed and we were unable to recover it. 00:25:54.452 [2024-11-19 11:27:49.810975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.811016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.452 qpair failed and we were unable to recover it. 00:25:54.452 [2024-11-19 11:27:49.811220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.811262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.452 qpair failed and we were unable to recover it. 00:25:54.452 [2024-11-19 11:27:49.811470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.811514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.452 qpair failed and we were unable to recover it. 00:25:54.452 [2024-11-19 11:27:49.811700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.811742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.452 qpair failed and we were unable to recover it. 
00:25:54.452 [2024-11-19 11:27:49.811902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.811944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.452 qpair failed and we were unable to recover it. 00:25:54.452 [2024-11-19 11:27:49.812079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.812120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.452 qpair failed and we were unable to recover it. 00:25:54.452 [2024-11-19 11:27:49.812289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.452 [2024-11-19 11:27:49.812330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.812495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.812537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.812736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.812777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 
00:25:54.453 [2024-11-19 11:27:49.812894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.812936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.813126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.813168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.813370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.813424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.813567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.813609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.813815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.813860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 
00:25:54.453 [2024-11-19 11:27:49.814086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.814129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.814324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.814378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.814544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.814587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.814760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.814801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.814983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.815027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 
00:25:54.453 [2024-11-19 11:27:49.815222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.815267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.815458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.815500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.815694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.815738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.815910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.815955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.816157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.816209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 
00:25:54.453 [2024-11-19 11:27:49.816430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.816474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.816671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.816716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.816913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.816962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.817163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.817208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.817350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.817427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 
00:25:54.453 [2024-11-19 11:27:49.817592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.817633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.817804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.817849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.818017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.818062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.818257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.818302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.818471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.818514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 
00:25:54.453 [2024-11-19 11:27:49.818702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.818746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.818891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.818933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.819118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.819177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.819315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.819359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.819519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.819562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 
00:25:54.453 [2024-11-19 11:27:49.819730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.819775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.820017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.820062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.820292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.820337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.820551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.820594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.820749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.820793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 
00:25:54.453 [2024-11-19 11:27:49.820975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.821019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.821187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.453 [2024-11-19 11:27:49.821231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.453 qpair failed and we were unable to recover it. 00:25:54.453 [2024-11-19 11:27:49.821418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.821461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.821676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.821718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.821944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.821988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 
00:25:54.454 [2024-11-19 11:27:49.822173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.822217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.822449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.822494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.822632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.822676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.822811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.822856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.823048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.823092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 
00:25:54.454 [2024-11-19 11:27:49.823257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.823313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.823486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.823531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.823729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.823774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.823949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.823993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.824157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.824202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 
00:25:54.454 [2024-11-19 11:27:49.824332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.824388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.824578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.824622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.824836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.824880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.825014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.825059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.825260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.825304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 
00:25:54.454 [2024-11-19 11:27:49.825479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.825524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.825723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.825769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.825946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.825991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.826250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.826323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.826594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.826639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 
00:25:54.454 [2024-11-19 11:27:49.826849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.826893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.827061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.827105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.827353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.827441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.827620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.827665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 00:25:54.454 [2024-11-19 11:27:49.827879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.454 [2024-11-19 11:27:49.827924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.454 qpair failed and we were unable to recover it. 
00:25:54.458 [2024-11-19 11:27:49.854814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.854861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.855086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.855136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.855380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.855432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.855614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.855664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.855870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.855920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 
00:25:54.458 [2024-11-19 11:27:49.856092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.856151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.856335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.856400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.856618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.856668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.856853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.856904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.857090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.857141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 
00:25:54.458 [2024-11-19 11:27:49.857352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.857416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.857625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.857675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.857860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.857910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.858115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.858165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.858389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.858441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 
00:25:54.458 [2024-11-19 11:27:49.858624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.858675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.858847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.858896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.859078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.859129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.859317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.859381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.859623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.859673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 
00:25:54.458 [2024-11-19 11:27:49.859845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.859895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.860104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.860155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.860375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.860427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.458 [2024-11-19 11:27:49.860673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.458 [2024-11-19 11:27:49.860723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.458 qpair failed and we were unable to recover it. 00:25:54.459 [2024-11-19 11:27:49.860905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.459 [2024-11-19 11:27:49.860956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.459 qpair failed and we were unable to recover it. 
00:25:54.459 [2024-11-19 11:27:49.861136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.459 [2024-11-19 11:27:49.861186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.459 qpair failed and we were unable to recover it. 00:25:54.459 [2024-11-19 11:27:49.861406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.459 [2024-11-19 11:27:49.861458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.459 qpair failed and we were unable to recover it. 00:25:54.459 [2024-11-19 11:27:49.861707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.459 [2024-11-19 11:27:49.861757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.459 qpair failed and we were unable to recover it. 00:25:54.459 [2024-11-19 11:27:49.861989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.459 [2024-11-19 11:27:49.862039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.459 qpair failed and we were unable to recover it. 00:25:54.459 [2024-11-19 11:27:49.862247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.459 [2024-11-19 11:27:49.862298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.459 qpair failed and we were unable to recover it. 
00:25:54.732 [2024-11-19 11:27:49.862545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.862597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.862837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.862888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.863109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.863168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.863381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.863418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.863608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.863659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 
00:25:54.732 [2024-11-19 11:27:49.863908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.863959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.864189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.864239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.864479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.864531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.864755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.864807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.865001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.865051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 
00:25:54.732 [2024-11-19 11:27:49.865267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.865318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.865557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.865608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.865816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.865866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.866087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.866150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.866378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.866456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 
00:25:54.732 [2024-11-19 11:27:49.866713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.866777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.867043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.867106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.867399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.867450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.867663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.867713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.867918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.867969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 
00:25:54.732 [2024-11-19 11:27:49.868121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.868171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.732 [2024-11-19 11:27:49.868380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.732 [2024-11-19 11:27:49.868432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.732 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.868669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.868719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.868928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.868978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.869245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.869295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 
00:25:54.733 [2024-11-19 11:27:49.869541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.869593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.869831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.869884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.870051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.870105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.870287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.870341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.870639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.870690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 
00:25:54.733 [2024-11-19 11:27:49.870925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.870975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.871261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.871311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.871541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.871593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.871813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.871864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.872081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.872132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 
00:25:54.733 [2024-11-19 11:27:49.872405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.872473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.872705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.872756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.873026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.873080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.873309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.873378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.873573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.873627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 
00:25:54.733 [2024-11-19 11:27:49.873861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.873918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.874210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.874262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.874498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.874550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.874825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.874877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.875075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.875126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 
00:25:54.733 [2024-11-19 11:27:49.875343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.875405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.875649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.875700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.875957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.876021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.876279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.876342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 00:25:54.733 [2024-11-19 11:27:49.876637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.733 [2024-11-19 11:27:49.876701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.733 qpair failed and we were unable to recover it. 
00:25:54.737 [2024-11-19 11:27:49.911067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.911130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.911432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.911496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.911757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.911819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.912084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.912147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.912446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.912510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 
00:25:54.737 [2024-11-19 11:27:49.912726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.912790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.913041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.913105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.913401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.913483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.913730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.913793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.914062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.914125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 
00:25:54.737 [2024-11-19 11:27:49.914433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.914494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.914745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.914804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.915098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.915155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.915377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.915464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.915758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.915818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 
00:25:54.737 [2024-11-19 11:27:49.916105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.916164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.916488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.916552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.916849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.916913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.917204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.917271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.917552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.917616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 
00:25:54.737 [2024-11-19 11:27:49.917875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.917940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.918223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.918286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.918564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.918629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.918923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.918987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.919235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.919298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 
00:25:54.737 [2024-11-19 11:27:49.919597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.919661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.919958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.920020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.737 qpair failed and we were unable to recover it. 00:25:54.737 [2024-11-19 11:27:49.920312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.737 [2024-11-19 11:27:49.920391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.920694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.920757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.921055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.921119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 
00:25:54.738 [2024-11-19 11:27:49.921399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.921464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.921736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.921798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.922083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.922157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.922395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.922460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.922764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.922828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 
00:25:54.738 [2024-11-19 11:27:49.923107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.923150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.923384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.923428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.923642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.923684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.923892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.923934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.924183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.924257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 
00:25:54.738 [2024-11-19 11:27:49.924553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.924618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.924894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.924956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.925197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.925260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.925574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.925638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.925904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.925966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 
00:25:54.738 [2024-11-19 11:27:49.926260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.926323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.926635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.926677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.926953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.927016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.927280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.927344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.927656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.927719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 
00:25:54.738 [2024-11-19 11:27:49.927966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.928029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.928275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.928338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.928649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.928712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.929000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.929062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.929378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.929443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 
00:25:54.738 [2024-11-19 11:27:49.929732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.929794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.930001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.930064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.930289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.930353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.930690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.930754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.931044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.931117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 
00:25:54.738 [2024-11-19 11:27:49.931389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.931455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.931707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.931769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.738 [2024-11-19 11:27:49.932040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-11-19 11:27:49.932103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.738 qpair failed and we were unable to recover it. 00:25:54.739 [2024-11-19 11:27:49.932410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-11-19 11:27:49.932476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.739 qpair failed and we were unable to recover it. 00:25:54.739 [2024-11-19 11:27:49.932744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-11-19 11:27:49.932807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.739 qpair failed and we were unable to recover it. 
00:25:54.739 [2024-11-19 11:27:49.933072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-11-19 11:27:49.933114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.739 qpair failed and we were unable to recover it. 00:25:54.739 [2024-11-19 11:27:49.933279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-11-19 11:27:49.933353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.739 qpair failed and we were unable to recover it. 00:25:54.739 [2024-11-19 11:27:49.933622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-11-19 11:27:49.933685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.739 qpair failed and we were unable to recover it. 00:25:54.739 [2024-11-19 11:27:49.933976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-11-19 11:27:49.934038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.739 qpair failed and we were unable to recover it. 00:25:54.739 [2024-11-19 11:27:49.934304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-11-19 11:27:49.934395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.739 qpair failed and we were unable to recover it. 
00:25:54.739 [2024-11-19 11:27:49.934701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-11-19 11:27:49.934765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.739 qpair failed and we were unable to recover it. 00:25:54.739 [2024-11-19 11:27:49.935000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-11-19 11:27:49.935063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.739 qpair failed and we were unable to recover it. 00:25:54.739 [2024-11-19 11:27:49.935357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-11-19 11:27:49.935438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.739 qpair failed and we were unable to recover it. 00:25:54.739 [2024-11-19 11:27:49.935731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-11-19 11:27:49.935795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.739 qpair failed and we were unable to recover it. 00:25:54.739 [2024-11-19 11:27:49.936049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-11-19 11:27:49.936112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.739 qpair failed and we were unable to recover it. 
00:25:54.739 [2024-11-19 11:27:49.936358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.936437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.936671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.936733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.937016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.937079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.937387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.937453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.937716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.937778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.938068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.938132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.938398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.938463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.938772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.938835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.939106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.939169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.939439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.939504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.939797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.939860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.940144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.940207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.940464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.940530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.940807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.940870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.941065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.941128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.941421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.941487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.941777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.941839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.942103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.942167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.942438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.942503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.942814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.942876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.943119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.943182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.943435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.943500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.943801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.943864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.944155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.944218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.944451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.944516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.739 [2024-11-19 11:27:49.944815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.739 [2024-11-19 11:27:49.944879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.739 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.945142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.945205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.945462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.945527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.945793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.945856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.946156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.946219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.946466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.946529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.946772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.946836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.947098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.947160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.947423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.947487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.947735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.947799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.948103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.948165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.948460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.948524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.948777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.948841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.949100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.949163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.949430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.949495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.949801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.949867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.950155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.950218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.950491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.950556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.950818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.950882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.951099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.951173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.951417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.951483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.951753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.951816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.952109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.952172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.952466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.952531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.952819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.952882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.953145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.953208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.953491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.953557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.953807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.953883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.954189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.954252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.954547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.954612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.954881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.954944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.955181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.955243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.955492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.955557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.955831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.955894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.956156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.956220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.956446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.956489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.956753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.956816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.957065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.957127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.957426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.957491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.740 [2024-11-19 11:27:49.957751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-11-19 11:27:49.957814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.740 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.958124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.958187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.958478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.958543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.958774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.958837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.959130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.959193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.959498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.959564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.959762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.959824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.960121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.960183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.960461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.960526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.960810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.960873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.961178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.961240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.961503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.961566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.961795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.961858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.962118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.962180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.962480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.962544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.962807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.962881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.963153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.963216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.963450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.963516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.963780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.963843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.964140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.964203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.964428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.964493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.964791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.964854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.965160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.965223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.965440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.965505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.965806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.965869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.966156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.966219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.966491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.966554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.966859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.966922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.967207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.967270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.967534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.967599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.967854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.967917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.968208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.968271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.968557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.968620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.968904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.968967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.969218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.969282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.969551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.969614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.969839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.969902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.970150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.741 [2024-11-19 11:27:49.970213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.741 qpair failed and we were unable to recover it.
00:25:54.741 [2024-11-19 11:27:49.970500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.970564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.970767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.970835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.971121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.971184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.971430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.971495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.971742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.971816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.972111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.972173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.972468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.972533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.972821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.972885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.973120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.973182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.973453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.973517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.973815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.973879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.974156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.974218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.974461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.742 [2024-11-19 11:27:49.974526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.742 qpair failed and we were unable to recover it.
00:25:54.742 [2024-11-19 11:27:49.974815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.974879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.975122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.975184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.975416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.975481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.975733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.975797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.976085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.976147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 
00:25:54.742 [2024-11-19 11:27:49.976442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.976507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.976807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.976871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.977144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.977208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.977441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.977506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.977808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.977872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 
00:25:54.742 [2024-11-19 11:27:49.978154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.978216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.978462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.978526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.978821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.978884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.979178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.979240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.979540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.979606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 
00:25:54.742 [2024-11-19 11:27:49.979856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.979926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.980239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.980301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.980586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.980650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.980889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.980963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.981258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.981321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 
00:25:54.742 [2024-11-19 11:27:49.981562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.981626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.981933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.981998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.982276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.982338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.982667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.982731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.983021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.983085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 
00:25:54.742 [2024-11-19 11:27:49.983405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.983470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.742 qpair failed and we were unable to recover it. 00:25:54.742 [2024-11-19 11:27:49.983759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.742 [2024-11-19 11:27:49.983823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.984065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.984129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.984398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.984465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.984724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.984788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 
00:25:54.743 [2024-11-19 11:27:49.985077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.985140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.985338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.985418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.985703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.985767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.986000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.986064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.986360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.986443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 
00:25:54.743 [2024-11-19 11:27:49.986703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.986766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.987031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.987095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.987350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.987426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.987728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.987791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.988005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.988068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 
00:25:54.743 [2024-11-19 11:27:49.988317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.988398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.988672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.988734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.989055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.989118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.989384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.989449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.989742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.989806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 
00:25:54.743 [2024-11-19 11:27:49.990110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.990173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.990447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.990511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.990809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.990873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.991171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.991233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.991477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.991543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 
00:25:54.743 [2024-11-19 11:27:49.991780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.991843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.992142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.992205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.992477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.992542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.992811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.992874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.993151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.993214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 
00:25:54.743 [2024-11-19 11:27:49.993455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.993521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.993768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.993832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.994126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.994188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.994458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.994524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.994806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.994880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 
00:25:54.743 [2024-11-19 11:27:49.995149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.995215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.995455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.995520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.995699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.995763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.995994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.996056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.996346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.996421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 
00:25:54.743 [2024-11-19 11:27:49.996690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.996754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.997027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.997090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.997342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.997428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.997612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.997676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.997966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.998030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 
00:25:54.743 [2024-11-19 11:27:49.998338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.998428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.998676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.998739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.998972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.999035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.999327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.999410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:49.999661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:49.999724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 
00:25:54.743 [2024-11-19 11:27:50.000022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:50.000085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.743 [2024-11-19 11:27:50.000334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.743 [2024-11-19 11:27:50.000429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.743 qpair failed and we were unable to recover it. 00:25:54.744 [2024-11-19 11:27:50.000708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.744 [2024-11-19 11:27:50.000771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.744 qpair failed and we were unable to recover it. 00:25:54.744 [2024-11-19 11:27:50.001065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.744 [2024-11-19 11:27:50.001128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.744 qpair failed and we were unable to recover it. 00:25:54.744 [2024-11-19 11:27:50.001405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.744 [2024-11-19 11:27:50.001470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.744 qpair failed and we were unable to recover it. 
00:25:54.746 [2024-11-19 11:27:50.035985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.036049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.036298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.036378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.036616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.036679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.036867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.036930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.037177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.037241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 
00:25:54.746 [2024-11-19 11:27:50.037488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.037553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.037822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.037889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.038111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.038174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.038427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.038493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.038717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.038780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 
00:25:54.746 [2024-11-19 11:27:50.039032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.039095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.039284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.039347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.039615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.039679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.039904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.039967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.040222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.040286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 
00:25:54.746 [2024-11-19 11:27:50.040531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.040596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.040847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.040910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.041165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.041229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.041484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.041559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.041804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.041867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 
00:25:54.746 [2024-11-19 11:27:50.042114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.042178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.042424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.042490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.042708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.042771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.042983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.043046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.043292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.043357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 
00:25:54.746 [2024-11-19 11:27:50.043621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.043684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.043923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.043986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.044204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.044268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.044501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.044565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 00:25:54.746 [2024-11-19 11:27:50.044784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.746 [2024-11-19 11:27:50.044848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.746 qpair failed and we were unable to recover it. 
00:25:54.746 [2024-11-19 11:27:50.045090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.045153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.045386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.045450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.045685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.045750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.045973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.046035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.046276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.046340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 
00:25:54.747 [2024-11-19 11:27:50.046590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.046654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.046912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.046975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.047185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.047247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.047490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.047555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.047769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.047832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 
00:25:54.747 [2024-11-19 11:27:50.048069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.048132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.048394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.048459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.048642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.048707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.048893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.048956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.049161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.049225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 
00:25:54.747 [2024-11-19 11:27:50.049486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.049547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.049791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.049849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.050052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.050111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.050312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.050383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.050587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.050646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 
00:25:54.747 [2024-11-19 11:27:50.050827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.050885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.051125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.051184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.051421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.051481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.051689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.051748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.051976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.052035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 
00:25:54.747 [2024-11-19 11:27:50.052250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.052308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.052542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.052597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.052773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.052827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.053012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.053067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.053262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.053317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 
00:25:54.747 [2024-11-19 11:27:50.053510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.053565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.053792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.053847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.054013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.054068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.054269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.054323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.054582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.054637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 
00:25:54.747 [2024-11-19 11:27:50.054856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.054913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.055150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.055205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.055483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.055535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.055802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.055853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.056130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.056180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 
00:25:54.747 [2024-11-19 11:27:50.056402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.056454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.056615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.056666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.056938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.056989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.057261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.057313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.057499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.057550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 
00:25:54.747 [2024-11-19 11:27:50.057833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.057883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.058129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.058180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.058450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.058502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.058666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.058717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 00:25:54.747 [2024-11-19 11:27:50.058966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.747 [2024-11-19 11:27:50.059018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:54.747 qpair failed and we were unable to recover it. 
00:25:54.749 [... identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triplets repeat through [2024-11-19 11:27:50.089467]; from [2024-11-19 11:27:50.064504] onward the failing tqpair is 0x7fb728000b90 instead of 0x1045fa0, still addr=10.0.0.2, port=4420 ...]
00:25:54.749 [2024-11-19 11:27:50.089651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.749 [2024-11-19 11:27:50.089685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.749 qpair failed and we were unable to recover it. 00:25:54.749 [2024-11-19 11:27:50.089871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.749 [2024-11-19 11:27:50.089905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.749 qpair failed and we were unable to recover it. 00:25:54.749 [2024-11-19 11:27:50.090104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.749 [2024-11-19 11:27:50.090137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.749 qpair failed and we were unable to recover it. 00:25:54.749 [2024-11-19 11:27:50.090275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.749 [2024-11-19 11:27:50.090318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.749 qpair failed and we were unable to recover it. 00:25:54.749 [2024-11-19 11:27:50.090484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.749 [2024-11-19 11:27:50.090519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.749 qpair failed and we were unable to recover it. 
00:25:54.750 [2024-11-19 11:27:50.090742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.090776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.090943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.090976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.091139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.091173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.091393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.091429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.091596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.091630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 
00:25:54.750 [2024-11-19 11:27:50.091815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.091849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.092095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.092128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.092378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.092412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.092599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.092633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.092936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.092999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 
00:25:54.750 [2024-11-19 11:27:50.093258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.093322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.093586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.093650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.093950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.094014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.094239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.094302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.094648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.094712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 
00:25:54.750 [2024-11-19 11:27:50.095013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.095077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.095383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.095449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.095703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.095766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.096025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.096089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.096410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.096487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 
00:25:54.750 [2024-11-19 11:27:50.096803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.096879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.097151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.097228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.097532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.097611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.097940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.098010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.098342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.098430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 
00:25:54.750 [2024-11-19 11:27:50.098758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.098830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.099157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.099234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.099530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.099606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.099878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.099960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.100299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.100392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 
00:25:54.750 [2024-11-19 11:27:50.100675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.100750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.101077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.101143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.101452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.101519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.101828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.101894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.102182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.102245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 
00:25:54.750 [2024-11-19 11:27:50.102541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.102609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.102868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.102932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.103238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.103302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.103577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.103653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.103956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.104021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 
00:25:54.750 [2024-11-19 11:27:50.104274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.104338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.104647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.104711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.104962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.105024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.105273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.105337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.105611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.105675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 
00:25:54.750 [2024-11-19 11:27:50.105941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.106003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.106314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.106429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.106755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.106821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.107072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.107135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.107439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.107505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 
00:25:54.750 [2024-11-19 11:27:50.107805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.107871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.108158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.108222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.108525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.108591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.108890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.108953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.109251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.109314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 
00:25:54.750 [2024-11-19 11:27:50.109576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.109640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.109860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.109923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.110224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.110287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.110605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.750 [2024-11-19 11:27:50.110670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.750 qpair failed and we were unable to recover it. 00:25:54.750 [2024-11-19 11:27:50.110972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.111035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 
00:25:54.751 [2024-11-19 11:27:50.111326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.111408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 00:25:54.751 [2024-11-19 11:27:50.111703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.111769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 00:25:54.751 [2024-11-19 11:27:50.112013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.112075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 00:25:54.751 [2024-11-19 11:27:50.112360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.112442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 00:25:54.751 [2024-11-19 11:27:50.112738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.112803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 
00:25:54.751 [2024-11-19 11:27:50.113099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.113161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 00:25:54.751 [2024-11-19 11:27:50.113414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.113479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 00:25:54.751 [2024-11-19 11:27:50.113732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.113797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 00:25:54.751 [2024-11-19 11:27:50.114042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.114105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 00:25:54.751 [2024-11-19 11:27:50.114414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.114479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 
00:25:54.751 [2024-11-19 11:27:50.114766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.114830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 00:25:54.751 [2024-11-19 11:27:50.115086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.115150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 00:25:54.751 [2024-11-19 11:27:50.115417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.115481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 00:25:54.751 [2024-11-19 11:27:50.115778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.115841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 00:25:54.751 [2024-11-19 11:27:50.116136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.751 [2024-11-19 11:27:50.116199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.751 qpair failed and we were unable to recover it. 
00:25:54.753 [2024-11-19 11:27:50.154916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.154979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.155284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.155347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.155699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.155763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.156059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.156122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.156414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.156480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 
00:25:54.753 [2024-11-19 11:27:50.156771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.156834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.157130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.157193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.157481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.157547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.157757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.157820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.158110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.158174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 
00:25:54.753 [2024-11-19 11:27:50.158479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.158544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.158822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.158885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.159183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.159246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.159505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.159589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.159892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.159954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 
00:25:54.753 [2024-11-19 11:27:50.160252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.160316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.160699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.160803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.161190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.161271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.161658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.161740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.162109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.162187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 
00:25:54.753 [2024-11-19 11:27:50.162508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.162586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.162952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.163028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.163347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.163452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.163811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.163887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.164251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.164327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 
00:25:54.753 [2024-11-19 11:27:50.164706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.164782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.165097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.165172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.165554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.165631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.165995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.166072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.166431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.166510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 
00:25:54.753 [2024-11-19 11:27:50.166865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.166941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.167302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.167391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.167753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.167829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.168140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.168216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.168580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.168655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 
00:25:54.753 [2024-11-19 11:27:50.169016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.169092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.169462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.169539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.169872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.169946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.170315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.170414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.170766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.170843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 
00:25:54.753 [2024-11-19 11:27:50.171220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.171296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.171637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.171713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.172042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.172118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.172487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.172564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.172912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.172987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 
00:25:54.753 [2024-11-19 11:27:50.173353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.173444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.173794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.173868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.174213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.174290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.174649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.174725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 00:25:54.753 [2024-11-19 11:27:50.175070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.175145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.753 qpair failed and we were unable to recover it. 
00:25:54.753 [2024-11-19 11:27:50.175490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.753 [2024-11-19 11:27:50.175567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.175938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.176014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.176341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.176439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.176754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.176843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.177213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.177291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 
00:25:54.754 [2024-11-19 11:27:50.177657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.177734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.178091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.178166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.178517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.178594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.178958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.179033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.179356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.179457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 
00:25:54.754 [2024-11-19 11:27:50.179828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.179904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.180267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.180342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.180718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.180795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.181160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.181236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.181609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.181684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 
00:25:54.754 [2024-11-19 11:27:50.182047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.182122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.182476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.182554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.182926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.183002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.183392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.183480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.183826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.183901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 
00:25:54.754 [2024-11-19 11:27:50.184263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.184339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.184684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.184760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.185071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.185145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.185506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.185583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 00:25:54.754 [2024-11-19 11:27:50.185944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.754 [2024-11-19 11:27:50.186020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:54.754 qpair failed and we were unable to recover it. 
00:25:54.754 [2024-11-19 11:27:50.186400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.754 [2024-11-19 11:27:50.186477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420
00:25:54.754 qpair failed and we were unable to recover it.
00:25:54.754 [2024-11-19 11:27:50.188748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.754 [2024-11-19 11:27:50.188847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:54.754 qpair failed and we were unable to recover it.
[The same three-line error triplet (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7fb71c000b90 / 0x1045fa0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 11:27:50.186 through 11:27:50.228; only the timestamps vary. Repeats elided.]
00:25:55.035 [2024-11-19 11:27:50.228901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.228964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.229279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.229342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.229671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.229734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.230032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.230093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.230350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.230434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 
00:25:55.035 [2024-11-19 11:27:50.230719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.230783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.231075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.231138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.231441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.231507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.231794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.231857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.232155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.232217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 
00:25:55.035 [2024-11-19 11:27:50.232463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.232527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.232816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.232879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.233194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.233257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.233571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.233635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.233920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.233984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 
00:25:55.035 [2024-11-19 11:27:50.234286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.234348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.234665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.035 [2024-11-19 11:27:50.234729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.035 qpair failed and we were unable to recover it. 00:25:55.035 [2024-11-19 11:27:50.234925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.234989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.235269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.235331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.235660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.235724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 
00:25:55.036 [2024-11-19 11:27:50.236012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.236075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.236403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.236469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.236777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.236839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.237080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.237144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.237431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.237497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 
00:25:55.036 [2024-11-19 11:27:50.237761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.237824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.238087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.238151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.238437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.238503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.238807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.238870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.239118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.239181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 
00:25:55.036 [2024-11-19 11:27:50.239482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.239547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.239837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.239899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.240167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.240230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.240530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.240596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.240893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.240956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 
00:25:55.036 [2024-11-19 11:27:50.241215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.241279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.241591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.241655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.241952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.242016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.242304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.242387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.242677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.242751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 
00:25:55.036 [2024-11-19 11:27:50.243050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.243115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.243402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.243468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.243774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.243837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.244089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.244153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.244442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.244507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 
00:25:55.036 [2024-11-19 11:27:50.244772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.244836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.036 [2024-11-19 11:27:50.245131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.036 [2024-11-19 11:27:50.245194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.036 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.245430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.245495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.245779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.245842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.246140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.246203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 
00:25:55.037 [2024-11-19 11:27:50.246458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.246522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.246789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.246852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.247136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.247201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.247475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.247539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.247821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.247885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 
00:25:55.037 [2024-11-19 11:27:50.248173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.248237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.248485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.248550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.248854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.248917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.249205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.249269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.249582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.249647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 
00:25:55.037 [2024-11-19 11:27:50.249903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.249966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.250225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.250289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.250574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.250638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.250930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.250993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.251246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.251310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 
00:25:55.037 [2024-11-19 11:27:50.251636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.251701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.251993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.252066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.252316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.252415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.252732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.252796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.253047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.253110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 
00:25:55.037 [2024-11-19 11:27:50.253390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.253455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.253755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.253820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.254118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.254180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.254479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.254545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.254839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.254904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 
00:25:55.037 [2024-11-19 11:27:50.255201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.255264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.255539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.037 [2024-11-19 11:27:50.255604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.037 qpair failed and we were unable to recover it. 00:25:55.037 [2024-11-19 11:27:50.255864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.038 [2024-11-19 11:27:50.255928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.038 qpair failed and we were unable to recover it. 00:25:55.038 [2024-11-19 11:27:50.256221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.038 [2024-11-19 11:27:50.256284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.038 qpair failed and we were unable to recover it. 00:25:55.038 [2024-11-19 11:27:50.256589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.038 [2024-11-19 11:27:50.256654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.038 qpair failed and we were unable to recover it. 
00:25:55.042 [2024-11-19 11:27:50.293054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.293114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.293316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.293394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.293605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.293665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.293962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.294022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.294218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.294278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 
00:25:55.042 [2024-11-19 11:27:50.294486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.294547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.294785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.294845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.295103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.295164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.295406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.295467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.295733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.295794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 
00:25:55.042 [2024-11-19 11:27:50.296054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.296116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.296426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.296488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.296729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.296791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.297095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.297157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.297394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.297455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 
00:25:55.042 [2024-11-19 11:27:50.297646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.297706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.297981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.298050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.298276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.298336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.298544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.298605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.298876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.298937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 
00:25:55.042 [2024-11-19 11:27:50.299175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.299235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.299474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.299537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.299774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.299834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.300120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.300183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.300443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.300508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 
00:25:55.042 [2024-11-19 11:27:50.300739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.300803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.042 [2024-11-19 11:27:50.301093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.042 [2024-11-19 11:27:50.301157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.042 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.301429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.301493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.301788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.301851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.302114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.302177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 
00:25:55.043 [2024-11-19 11:27:50.302470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.302534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.302794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.302866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.303137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.303201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.303460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.303524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.303762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.303834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 
00:25:55.043 [2024-11-19 11:27:50.304128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.304192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.304453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.304517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.304751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.304814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.305068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.305132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.305419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.305485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 
00:25:55.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2727523 Killed "${NVMF_APP[@]}" "$@" 00:25:55.043 [2024-11-19 11:27:50.305794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.305857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.306032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.306106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:55.043 [2024-11-19 11:27:50.306387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.306451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.306627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.306689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 
00:25:55.043 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 [2024-11-19 11:27:50.306942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.307006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:55.043 [2024-11-19 11:27:50.307307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.307384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.307599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.307663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.307878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.307947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 
00:25:55.043 [2024-11-19 11:27:50.308166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.308228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.308490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.308555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.308806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.308869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.309090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.309154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.309457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.309492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 
00:25:55.043 [2024-11-19 11:27:50.309686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.309727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.309863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.309899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.043 qpair failed and we were unable to recover it. 00:25:55.043 [2024-11-19 11:27:50.310125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.043 [2024-11-19 11:27:50.310187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.310437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.310502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.310718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.310782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 
00:25:55.044 [2024-11-19 11:27:50.310991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.311054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.311290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.311354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.311580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.311645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.311838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.311873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.312019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.312054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 
00:25:55.044 [2024-11-19 11:27:50.312212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.312274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.312522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.312557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.312722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.312793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.313038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.313100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.313355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.313435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 
00:25:55.044 [2024-11-19 11:27:50.313585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.313618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.313744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.313807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2728081 00:25:55.044 [2024-11-19 11:27:50.314011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:55.044 [2024-11-19 11:27:50.314075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2728081 00:25:55.044 [2024-11-19 11:27:50.314335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.314421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 
00:25:55.044 [2024-11-19 11:27:50.314537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2728081 ']' 00:25:55.044 [2024-11-19 11:27:50.314571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.044 [2024-11-19 11:27:50.314713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.314775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:55.044 [2024-11-19 11:27:50.315072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.044 [2024-11-19 11:27:50.315136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 
00:25:55.044 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:55.044 [2024-11-19 11:27:50.315348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.044 [2024-11-19 11:27:50.315442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.315564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.315599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.315775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.315840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.316029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.044 [2024-11-19 11:27:50.316089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.044 qpair failed and we were unable to recover it. 00:25:55.044 [2024-11-19 11:27:50.316315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.045 [2024-11-19 11:27:50.316427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.045 qpair failed and we were unable to recover it. 
00:25:55.045 [2024-11-19 11:27:50.316591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.045 [2024-11-19 11:27:50.316625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.045 qpair failed and we were unable to recover it.
00:25:55.045 [2024-11-19 11:27:50.316772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.045 [2024-11-19 11:27:50.316806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.045 qpair failed and we were unable to recover it.
00:25:55.045 [2024-11-19 11:27:50.316947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.045 [2024-11-19 11:27:50.316981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.045 qpair failed and we were unable to recover it.
00:25:55.045 [2024-11-19 11:27:50.317152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.045 [2024-11-19 11:27:50.317184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.045 qpair failed and we were unable to recover it.
00:25:55.045 [2024-11-19 11:27:50.317330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.045 [2024-11-19 11:27:50.317373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.045 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock sock connection error (tqpair=0x1045fa0, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." retry cycles repeat from 2024-11-19 11:27:50.317523 through 11:27:50.333824 ...]
00:25:55.048 [2024-11-19 11:27:50.333952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-11-19 11:27:50.333981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-11-19 11:27:50.334135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-11-19 11:27:50.334163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-11-19 11:27:50.334263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-11-19 11:27:50.334292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-11-19 11:27:50.334389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-11-19 11:27:50.334419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-11-19 11:27:50.334523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-11-19 11:27:50.334552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 
00:25:55.048 [2024-11-19 11:27:50.334692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-11-19 11:27:50.334721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-11-19 11:27:50.334843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-11-19 11:27:50.334873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-11-19 11:27:50.335005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-11-19 11:27:50.335034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-11-19 11:27:50.335149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-11-19 11:27:50.335178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 00:25:55.048 [2024-11-19 11:27:50.335267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.048 [2024-11-19 11:27:50.335302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.048 qpair failed and we were unable to recover it. 
00:25:55.049 [2024-11-19 11:27:50.335448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.335478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.335579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.335609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.335746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.335775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.335909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.335934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.336022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.336047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 
00:25:55.049 [2024-11-19 11:27:50.336183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.336208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.336303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.336328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.336463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.336488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.336609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.336635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.336756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.336799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 
00:25:55.049 [2024-11-19 11:27:50.336893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.336918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.337064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.337089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.337206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.337245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.337386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.337412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.337501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.337526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 
00:25:55.049 [2024-11-19 11:27:50.337649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.337674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.337797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.337823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.337926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.337951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.338048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.338074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.338197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.338222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 
00:25:55.049 [2024-11-19 11:27:50.338315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.338341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.338449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.338475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.338591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.338617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.338754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.338779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.338922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.338947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 
00:25:55.049 [2024-11-19 11:27:50.339037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.339062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.339154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.049 [2024-11-19 11:27:50.339192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.049 qpair failed and we were unable to recover it. 00:25:55.049 [2024-11-19 11:27:50.339389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.339419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.339542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.339567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.339672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.339696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 
00:25:55.050 [2024-11-19 11:27:50.339827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.339852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.339961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.339986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.340109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.340133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.340225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.340251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.340368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.340394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 
00:25:55.050 [2024-11-19 11:27:50.340486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.340511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.340632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.340658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.340833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.340856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.340965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.340989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.341105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.341130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 
00:25:55.050 [2024-11-19 11:27:50.341247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.341272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.341402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.341428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.341546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.341570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.341716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.341754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.341925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.341949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 
00:25:55.050 [2024-11-19 11:27:50.342037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.342061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.342197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.342221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.342356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.342387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.342535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.342560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.342693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.342732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 
00:25:55.050 [2024-11-19 11:27:50.342830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.342853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.342983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.343008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.343124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.343149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.343308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.343346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.343499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.343524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 
00:25:55.050 [2024-11-19 11:27:50.343647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.343671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.343791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.343817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.343953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.343976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.050 qpair failed and we were unable to recover it. 00:25:55.050 [2024-11-19 11:27:50.344122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.050 [2024-11-19 11:27:50.344145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-11-19 11:27:50.344277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-11-19 11:27:50.344303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 
00:25:55.051 [2024-11-19 11:27:50.344420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-11-19 11:27:50.344446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-11-19 11:27:50.344565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-11-19 11:27:50.344590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-11-19 11:27:50.344713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-11-19 11:27:50.344737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-11-19 11:27:50.344860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-11-19 11:27:50.344900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-11-19 11:27:50.345020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-11-19 11:27:50.345061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 
00:25:55.051 [2024-11-19 11:27:50.345206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-11-19 11:27:50.345230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-11-19 11:27:50.345388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-11-19 11:27:50.345414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-11-19 11:27:50.345564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-11-19 11:27:50.345591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-11-19 11:27:50.345717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-11-19 11:27:50.345741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 00:25:55.051 [2024-11-19 11:27:50.345840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.051 [2024-11-19 11:27:50.345866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.051 qpair failed and we were unable to recover it. 
00:25:55.055 [2024-11-19 11:27:50.363283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.363309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.363422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.363447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.363584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.363608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.363714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.363739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.363898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.363923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 
00:25:55.055 [2024-11-19 11:27:50.364046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.364071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.364170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.364196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.364287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.364312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.364430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.364456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.364599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.364628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 
00:25:55.055 [2024-11-19 11:27:50.364792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.364829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.364969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.365007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.365142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.365167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.365294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.365318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.365485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.365511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 
00:25:55.055 [2024-11-19 11:27:50.365637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.365662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.365803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.365842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.365962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.365987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.366128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.366168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.366291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.366315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 
00:25:55.055 [2024-11-19 11:27:50.366442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.366468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.366587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.366611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.366753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.366776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.366920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.366946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.367063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.367087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 
00:25:55.055 [2024-11-19 11:27:50.367210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.367250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.055 qpair failed and we were unable to recover it. 00:25:55.055 [2024-11-19 11:27:50.367410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.055 [2024-11-19 11:27:50.367436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.367561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.367585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.367733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.367772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.367907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.367930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 
00:25:55.056 [2024-11-19 11:27:50.368088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.368112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.368290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.368314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.368484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.368510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.368612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.368637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.368774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.368799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 
00:25:55.056 [2024-11-19 11:27:50.368905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.368930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.369053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.369082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.369202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.369227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.369322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.369347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.369467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.369493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 
00:25:55.056 [2024-11-19 11:27:50.369590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.369615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.369696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.369721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.369819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.369843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.369976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.370002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.370129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.370153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 
00:25:55.056 [2024-11-19 11:27:50.370249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.370273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.370389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.370414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.370548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.370573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.370695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.370720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 00:25:55.056 [2024-11-19 11:27:50.370868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.056 [2024-11-19 11:27:50.370906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.056 qpair failed and we were unable to recover it. 
00:25:55.056 [2024-11-19 11:27:50.371700] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:25:55.057 [2024-11-19 11:27:50.371783] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:25:55.058 [2024-11-19 11:27:50.377878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.377918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-11-19 11:27:50.378063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.378103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-11-19 11:27:50.378265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.378306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-11-19 11:27:50.378446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.378472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-11-19 11:27:50.378615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.378641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 
00:25:55.058 [2024-11-19 11:27:50.378792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.378817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-11-19 11:27:50.378975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.379000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-11-19 11:27:50.379106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.379130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-11-19 11:27:50.379289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.379315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-11-19 11:27:50.379496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.379522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 
00:25:55.058 [2024-11-19 11:27:50.379608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.379634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-11-19 11:27:50.379759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.379784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.058 [2024-11-19 11:27:50.379922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.058 [2024-11-19 11:27:50.379948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.058 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.380066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.380092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.380255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.380279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 
00:25:55.059 [2024-11-19 11:27:50.380419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.380461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.380581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.380607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.380726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.380753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.380924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.380948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.381084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.381109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 
00:25:55.059 [2024-11-19 11:27:50.381219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.381245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.381376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.381401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.381536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.381562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.381684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.381709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.381831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.381857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 
00:25:55.059 [2024-11-19 11:27:50.382001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.382041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.382147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.382172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.382354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.382387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.382505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.382531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.382630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.382655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 
00:25:55.059 [2024-11-19 11:27:50.382816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.382840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.382926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.382951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.383070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.383094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.383236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.383261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.383384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.383410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 
00:25:55.059 [2024-11-19 11:27:50.383538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.383563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.383668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.383693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.383814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.383839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.383962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.383987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.384134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.384158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 
00:25:55.059 [2024-11-19 11:27:50.384284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.384308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.384484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.384510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.384604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.384628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.059 qpair failed and we were unable to recover it. 00:25:55.059 [2024-11-19 11:27:50.384723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.059 [2024-11-19 11:27:50.384761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.384922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.384946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 
00:25:55.060 [2024-11-19 11:27:50.385040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.385064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.385220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.385245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.385339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.385406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.385539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.385564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.385693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.385718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 
00:25:55.060 [2024-11-19 11:27:50.385861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.385899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.386054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.386079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.386223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.386248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.386378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.386417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.386536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.386560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 
00:25:55.060 [2024-11-19 11:27:50.386700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.386724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.386850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.386874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.387030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.387069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.387187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.387212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.387310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.387335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 
00:25:55.060 [2024-11-19 11:27:50.387448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.387473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.387606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.387630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.387757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.387799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.387934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.387958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.388125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.388166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 
00:25:55.060 [2024-11-19 11:27:50.388286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.388312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.388411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.388437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.388557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.388583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.388704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.388729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.388848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.388872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 
00:25:55.060 [2024-11-19 11:27:50.389009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.389047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.389145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.389169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.389336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.389368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.389492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.389518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 00:25:55.060 [2024-11-19 11:27:50.389678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.389701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 
00:25:55.060 [2024-11-19 11:27:50.389807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.060 [2024-11-19 11:27:50.389831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.060 qpair failed and we were unable to recover it. 
00:25:55.064 [2024-11-19 11:27:50.408241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-11-19 11:27:50.408278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-11-19 11:27:50.408419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-11-19 11:27:50.408445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-11-19 11:27:50.408586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-11-19 11:27:50.408612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-11-19 11:27:50.408718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.064 [2024-11-19 11:27:50.408743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.064 qpair failed and we were unable to recover it. 00:25:55.064 [2024-11-19 11:27:50.408878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.408917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 
00:25:55.065 [2024-11-19 11:27:50.409061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.409087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.409215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.409238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.409377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.409417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.409539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.409564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.409731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.409755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 
00:25:55.065 [2024-11-19 11:27:50.409903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.409927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.410064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.410090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.410214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.410239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.410374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.410398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.410501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.410524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 
00:25:55.065 [2024-11-19 11:27:50.410693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.410716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.410881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.410904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.411000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.411025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.411160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.411184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.411315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.411338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 
00:25:55.065 [2024-11-19 11:27:50.411468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.411492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.411623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.411666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.411820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.411858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.412015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.412039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.412150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.412173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 
00:25:55.065 [2024-11-19 11:27:50.412259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.412283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.412438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.412465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.412601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.412626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.412801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.412824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.412957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.412980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 
00:25:55.065 [2024-11-19 11:27:50.413114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.413154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.413301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.413325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.413492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.413517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.413673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.413696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.413851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.413889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 
00:25:55.065 [2024-11-19 11:27:50.414050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.414089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.065 [2024-11-19 11:27:50.414245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.065 [2024-11-19 11:27:50.414268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.065 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.414411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.414435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.414574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.414600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.414704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.414728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 
00:25:55.066 [2024-11-19 11:27:50.414863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.414886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.414966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.414990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.415148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.415172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.415283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.415308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.415480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.415506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 
00:25:55.066 [2024-11-19 11:27:50.415663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.415686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.415852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.415876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.416032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.416055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.416176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.416198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.416379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.416405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 
00:25:55.066 [2024-11-19 11:27:50.416547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.416571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.416702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.416740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.416853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.416891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.417029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.417051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.417174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.417198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 
00:25:55.066 [2024-11-19 11:27:50.417315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.417339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.417483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.417507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.417606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.417632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.417786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.417809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.417951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.417990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 
00:25:55.066 [2024-11-19 11:27:50.418135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.418158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.418294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.418317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.418434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.418460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.418572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.418596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.418761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.418785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 
00:25:55.066 [2024-11-19 11:27:50.418904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.418927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.419017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.419041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.419204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.419227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.066 [2024-11-19 11:27:50.419390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.066 [2024-11-19 11:27:50.419414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.066 qpair failed and we were unable to recover it. 00:25:55.067 [2024-11-19 11:27:50.419572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-11-19 11:27:50.419597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 
00:25:55.067 [2024-11-19 11:27:50.419693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-11-19 11:27:50.419717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-11-19 11:27:50.419835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-11-19 11:27:50.419859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-11-19 11:27:50.420000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-11-19 11:27:50.420024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-11-19 11:27:50.420177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-11-19 11:27:50.420201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 00:25:55.067 [2024-11-19 11:27:50.420370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-11-19 11:27:50.420395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 
00:25:55.067 [2024-11-19 11:27:50.420539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.067 [2024-11-19 11:27:50.420564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.067 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats ~113 more times between 11:27:50.420695 and 11:27:50.441637, only the timestamps changing; repeats elided ...]
00:25:55.070 [2024-11-19 11:27:50.441856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.070 [2024-11-19 11:27:50.441881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.070 qpair failed and we were unable to recover it. 
00:25:55.070 [2024-11-19 11:27:50.442083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.070 [2024-11-19 11:27:50.442107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.070 qpair failed and we were unable to recover it. 00:25:55.070 [2024-11-19 11:27:50.442291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.070 [2024-11-19 11:27:50.442314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.070 qpair failed and we were unable to recover it. 00:25:55.070 [2024-11-19 11:27:50.442485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.070 [2024-11-19 11:27:50.442510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.070 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.442628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.442653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.442880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.442905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 
00:25:55.071 [2024-11-19 11:27:50.443143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.443167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.443301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.443323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.443501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.443526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.443693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.443718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.443927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.443952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 
00:25:55.071 [2024-11-19 11:27:50.444130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.444153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.444396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.444428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.444549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.444574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.444764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.444790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.444998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.445022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 
00:25:55.071 [2024-11-19 11:27:50.445237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.445262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.445489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.445515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.445628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.445667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.445801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.445826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.446033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.446058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 
00:25:55.071 [2024-11-19 11:27:50.446279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.446303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.446504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.446534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.446744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.446768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.446932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.446957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.447160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.447184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 
00:25:55.071 [2024-11-19 11:27:50.447367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.447393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.447603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.447628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.447794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.447817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.448063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.448087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.448256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.448280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 
00:25:55.071 [2024-11-19 11:27:50.448472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.448498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.448690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.071 [2024-11-19 11:27:50.448729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.071 qpair failed and we were unable to recover it. 00:25:55.071 [2024-11-19 11:27:50.448936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.448960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.449170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.449196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.449411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.449437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 
00:25:55.072 [2024-11-19 11:27:50.449677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.449702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.449940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.449965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.450161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.450201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.450334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.450360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.450495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.450520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 
00:25:55.072 [2024-11-19 11:27:50.450664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.450689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.450922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.450947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.451063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.451103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.451326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.451351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.451532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.451556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 
00:25:55.072 [2024-11-19 11:27:50.451770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.451795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.451970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.451993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.452206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.452230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.452440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.452471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.452699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.452723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 
00:25:55.072 [2024-11-19 11:27:50.452944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.452968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.453188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.453213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.453400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.453425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.453639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.453663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.453798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.453838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 
00:25:55.072 [2024-11-19 11:27:50.454061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.454086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.454262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.454286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.454467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.454492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.454726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.454750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.454955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.454981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 
00:25:55.072 [2024-11-19 11:27:50.455217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.455241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.072 [2024-11-19 11:27:50.455382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.072 [2024-11-19 11:27:50.455421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.072 qpair failed and we were unable to recover it. 00:25:55.073 [2024-11-19 11:27:50.455598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.455624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 00:25:55.073 [2024-11-19 11:27:50.455819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.455845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 00:25:55.073 [2024-11-19 11:27:50.455970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.455995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 
00:25:55.073 [2024-11-19 11:27:50.456164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.456189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 00:25:55.073 [2024-11-19 11:27:50.456438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.456464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 00:25:55.073 [2024-11-19 11:27:50.456711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.456736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 00:25:55.073 [2024-11-19 11:27:50.456936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.456961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 00:25:55.073 [2024-11-19 11:27:50.457167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.457191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 
00:25:55.073 [2024-11-19 11:27:50.457397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.457423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 00:25:55.073 [2024-11-19 11:27:50.457655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.457681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 00:25:55.073 [2024-11-19 11:27:50.457865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.457890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 00:25:55.073 [2024-11-19 11:27:50.458058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.458082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 00:25:55.073 [2024-11-19 11:27:50.458260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.458284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 
00:25:55.073 [2024-11-19 11:27:50.458478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.073 [2024-11-19 11:27:50.458504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.073 qpair failed and we were unable to recover it. 
[the same connect()/qpair-failure message pair repeats for tqpair=0x1045fa0, addr=10.0.0.2, port=4420 from timestamp 11:27:50.458647 through 11:27:50.483261; repeated entries collapsed]
00:25:55.075 [2024-11-19 11:27:50.474509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 
00:25:55.077 [2024-11-19 11:27:50.483489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.483515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 
00:25:55.077 [2024-11-19 11:27:50.483666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.483688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.483874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.483899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.484073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.484098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.484304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.484329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.484487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.484512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 
00:25:55.077 [2024-11-19 11:27:50.484698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.484723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.484924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.484949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.485149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.485179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.485359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.485406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.485580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.485605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 
00:25:55.077 [2024-11-19 11:27:50.485813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.485836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.486063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.486088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.486268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.486292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.486520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.486547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.486703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.486728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 
00:25:55.077 [2024-11-19 11:27:50.486953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.486977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.487187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.487210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.487437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.487464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.487613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.487638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.487784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.487807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 
00:25:55.077 [2024-11-19 11:27:50.488010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.488034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.488219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.488243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.488402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.077 [2024-11-19 11:27:50.488427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.077 qpair failed and we were unable to recover it. 00:25:55.077 [2024-11-19 11:27:50.488597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.488622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.488796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.488820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 
00:25:55.078 [2024-11-19 11:27:50.489043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.489068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.489294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.489318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.489567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.489593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.489744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.489770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.489993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.490018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 
00:25:55.078 [2024-11-19 11:27:50.490171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.490194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.490426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.490452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.490677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.490701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.490931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.490971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.491169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.491197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 
00:25:55.078 [2024-11-19 11:27:50.491385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.491411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.491564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.491588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.491740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.491780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.491960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.491985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.492154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.492178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 
00:25:55.078 [2024-11-19 11:27:50.492400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.492425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.492647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.492673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.492926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.492952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.493171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.493196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.493350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.493410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 
00:25:55.078 [2024-11-19 11:27:50.493593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.493619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.493798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.493837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.494041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.494066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.494236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.494259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.494477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.494504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 
00:25:55.078 [2024-11-19 11:27:50.494670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.494710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.494897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.494922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.495108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.495133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.495381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.495406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 00:25:55.078 [2024-11-19 11:27:50.495620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.495645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.078 qpair failed and we were unable to recover it. 
00:25:55.078 [2024-11-19 11:27:50.495856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.078 [2024-11-19 11:27:50.495880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.496054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.496080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.496277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.496303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.496513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.496539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.496680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.496704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 
00:25:55.079 [2024-11-19 11:27:50.496880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.496902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.497094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.497117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.497305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.497330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.497509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.497536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.497759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.497782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 
00:25:55.079 [2024-11-19 11:27:50.498003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.498027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.498232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.498257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.498475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.498501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.498637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.498675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.498897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.498921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 
00:25:55.079 [2024-11-19 11:27:50.499148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.499172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.499347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.499380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.499587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.499612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.499829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.499854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.500043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.500068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 
00:25:55.079 [2024-11-19 11:27:50.500286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.500310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.500544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.500569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.500774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.500798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.501026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.501051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 00:25:55.079 [2024-11-19 11:27:50.501281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.079 [2024-11-19 11:27:50.501306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.079 qpair failed and we were unable to recover it. 
00:25:55.080 [2024-11-19 11:27:50.508907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.080 [2024-11-19 11:27:50.508933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.080 qpair failed and we were unable to recover it.
00:25:55.080 [2024-11-19 11:27:50.509108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.080 [2024-11-19 11:27:50.509133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.080 qpair failed and we were unable to recover it.
00:25:55.080 [2024-11-19 11:27:50.509316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.080 [2024-11-19 11:27:50.509341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.080 qpair failed and we were unable to recover it.
00:25:55.080 [2024-11-19 11:27:50.509559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.080 [2024-11-19 11:27:50.509602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:55.080 qpair failed and we were unable to recover it.
00:25:55.081 [2024-11-19 11:27:50.509830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.081 [2024-11-19 11:27:50.509859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420
00:25:55.081 qpair failed and we were unable to recover it.
00:25:55.363 [2024-11-19 11:27:50.524692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.363 [2024-11-19 11:27:50.524733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.363 qpair failed and we were unable to recover it. 00:25:55.363 [2024-11-19 11:27:50.524932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.363 [2024-11-19 11:27:50.524958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.363 qpair failed and we were unable to recover it. 00:25:55.363 [2024-11-19 11:27:50.525189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.363 [2024-11-19 11:27:50.525216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.363 qpair failed and we were unable to recover it. 00:25:55.363 [2024-11-19 11:27:50.525456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.363 [2024-11-19 11:27:50.525483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.363 qpair failed and we were unable to recover it. 00:25:55.363 [2024-11-19 11:27:50.525639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.363 [2024-11-19 11:27:50.525664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.363 qpair failed and we were unable to recover it. 
00:25:55.363 [2024-11-19 11:27:50.525855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.363 [2024-11-19 11:27:50.525879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.363 qpair failed and we were unable to recover it. 00:25:55.363 [2024-11-19 11:27:50.526112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.363 [2024-11-19 11:27:50.526137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.363 qpair failed and we were unable to recover it. 00:25:55.363 [2024-11-19 11:27:50.526232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.363 [2024-11-19 11:27:50.526257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.363 qpair failed and we were unable to recover it. 00:25:55.363 [2024-11-19 11:27:50.526443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.363 [2024-11-19 11:27:50.526471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.363 qpair failed and we were unable to recover it. 00:25:55.363 [2024-11-19 11:27:50.526661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.363 [2024-11-19 11:27:50.526685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 
00:25:55.364 [2024-11-19 11:27:50.526871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.526895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.527115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.527140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.527349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.527391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.527612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.527638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.527823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.527849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 
00:25:55.364 [2024-11-19 11:27:50.528061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.528085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.528232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.528257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.528451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.528479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.528681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.528705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.528941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.528966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 
00:25:55.364 [2024-11-19 11:27:50.529197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.529223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.529448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.529476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.529662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.529688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.529873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.529898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.530136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.530172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 
00:25:55.364 [2024-11-19 11:27:50.530379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.530405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.530624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.530657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.530882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.530907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.531108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.531133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.531292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.531332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 
00:25:55.364 [2024-11-19 11:27:50.531540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.531567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.531757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.531784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.531984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.532024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.532221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.532245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.532459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.532485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 
00:25:55.364 [2024-11-19 11:27:50.532636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.532660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.532863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.532891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.533025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.533052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.533242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.533266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.533470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.533496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 
00:25:55.364 [2024-11-19 11:27:50.533625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.533651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.533857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.533892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.534086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.364 [2024-11-19 11:27:50.534112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.364 qpair failed and we were unable to recover it. 00:25:55.364 [2024-11-19 11:27:50.534302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.534326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.534551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.534577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 
00:25:55.365 [2024-11-19 11:27:50.534763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.534787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.534980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.535012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.535233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.535259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.535468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.535495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.535671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.535696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 
00:25:55.365 [2024-11-19 11:27:50.535869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.535895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.536062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.536088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.536318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.536343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.536506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.536531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.536745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.536771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 
00:25:55.365 [2024-11-19 11:27:50.536949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.536974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.537158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.537189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.537376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.537401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.537543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.537568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.537778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.537804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 
00:25:55.365 [2024-11-19 11:27:50.538024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.538049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.538236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.538261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.538483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.538510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.538660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.538698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.538918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.538947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 
00:25:55.365 [2024-11-19 11:27:50.539115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.539140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.539386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.539415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.539542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.539568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.539748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.539772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.539978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.540002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 
00:25:55.365 [2024-11-19 11:27:50.540206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.540230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.540479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.540506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.540718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.540743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.540966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.540991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 00:25:55.365 [2024-11-19 11:27:50.541181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.365 [2024-11-19 11:27:50.541206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.365 qpair failed and we were unable to recover it. 
00:25:55.365 [2024-11-19 11:27:50.541428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.541455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.541646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.541672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.541844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.541869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.542100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.542125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.542240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.542264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 
00:25:55.366 [2024-11-19 11:27:50.542433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.542468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.542682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.542708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.542865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.542890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.543070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.543094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.543276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.543300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 
00:25:55.366 [2024-11-19 11:27:50.543533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.543568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.543736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.543762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.543921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.543960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.544185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.544210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.544437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.544463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 
00:25:55.366 [2024-11-19 11:27:50.544631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.544666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.544891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.544916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.545092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.545117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.545285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.545310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.545505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.545533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 
00:25:55.366 [2024-11-19 11:27:50.545738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.545770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.546009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.546050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.546279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.546304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.546494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.546521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.546680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.546704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 
00:25:55.366 [2024-11-19 11:27:50.546927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.546952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.547157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.547181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.547410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.547436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.547611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.547637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.547814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.547845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 
00:25:55.366 [2024-11-19 11:27:50.548066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.548094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.366 [2024-11-19 11:27:50.548266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.366 [2024-11-19 11:27:50.548292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.366 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.548505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.548532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.548704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.548730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.548919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.548945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 
00:25:55.367 [2024-11-19 11:27:50.549154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.367 [2024-11-19 11:27:50.549196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.367 qpair failed and we were unable to recover it.
00:25:55.367 [2024-11-19 11:27:50.549391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.367 [2024-11-19 11:27:50.549423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.367 qpair failed and we were unable to recover it.
00:25:55.367 [2024-11-19 11:27:50.549561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.367 [2024-11-19 11:27:50.549587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.367 qpair failed and we were unable to recover it.
00:25:55.367 [2024-11-19 11:27:50.549614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:55.367 [2024-11-19 11:27:50.549649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:55.367 [2024-11-19 11:27:50.549664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:55.367 [2024-11-19 11:27:50.549677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:55.367 [2024-11-19 11:27:50.549687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:55.367 [2024-11-19 11:27:50.549820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.549849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.550072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.550099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.550311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.550337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.550563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.550590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.550824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.550849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 
00:25:55.367 [2024-11-19 11:27:50.551093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.367 [2024-11-19 11:27:50.551118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.367 qpair failed and we were unable to recover it.
00:25:55.367 [2024-11-19 11:27:50.551341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.367 [2024-11-19 11:27:50.551389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.367 qpair failed and we were unable to recover it.
00:25:55.367 [2024-11-19 11:27:50.551511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:25:55.367 [2024-11-19 11:27:50.551574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.367 [2024-11-19 11:27:50.551600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.367 qpair failed and we were unable to recover it.
00:25:55.367 [2024-11-19 11:27:50.551561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:25:55.367 [2024-11-19 11:27:50.551611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:25:55.367 [2024-11-19 11:27:50.551615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:25:55.367 [2024-11-19 11:27:50.551814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.367 [2024-11-19 11:27:50.551841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.367 qpair failed and we were unable to recover it.
00:25:55.367 [2024-11-19 11:27:50.552061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.552086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.552297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.552323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.552473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.552501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.552682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.552709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.552929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.552955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 
00:25:55.367 [2024-11-19 11:27:50.553179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.553205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.553391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.553418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.553572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.553598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.553803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.553829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.554005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.554031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 
00:25:55.367 [2024-11-19 11:27:50.554237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.554263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.367 [2024-11-19 11:27:50.554467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.367 [2024-11-19 11:27:50.554494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.367 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.554655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.554680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.554796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.554822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.554952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.554977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 
00:25:55.368 [2024-11-19 11:27:50.555130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.555155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.555379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.555405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.555516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.555542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.555701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.555726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.555923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.555965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 
00:25:55.368 [2024-11-19 11:27:50.556134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.556161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.556394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.556422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.556638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.556664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.556848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.556875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.557029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.557056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 
00:25:55.368 [2024-11-19 11:27:50.557248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.557274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.557447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.557474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.557596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.557622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.557741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.557766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.557969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.557995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 
00:25:55.368 [2024-11-19 11:27:50.558211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.558237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.558465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.558491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.558700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.558726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.558914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.558940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.559157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.559182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 
00:25:55.368 [2024-11-19 11:27:50.559330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.559355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.559550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.559576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.559696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.559721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.559849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.559874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.560055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.560082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 
00:25:55.368 [2024-11-19 11:27:50.560209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.560234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.560462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.560489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.560678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.560704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.560900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.560940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 00:25:55.368 [2024-11-19 11:27:50.561161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.368 [2024-11-19 11:27:50.561189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.368 qpair failed and we were unable to recover it. 
00:25:55.368 [2024-11-19 11:27:50.561347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-11-19 11:27:50.561383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-11-19 11:27:50.561610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-11-19 11:27:50.561636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-11-19 11:27:50.561817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-11-19 11:27:50.561844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-11-19 11:27:50.561996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-11-19 11:27:50.562022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 00:25:55.369 [2024-11-19 11:27:50.562178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.369 [2024-11-19 11:27:50.562204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.369 qpair failed and we were unable to recover it. 
00:25:55.369 [2024-11-19 11:27:50.562423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.562449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.562595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.562621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.562830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.562856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.563074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.563099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.563274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.563300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.563471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.563498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.563607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.563633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.563753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.563779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.563889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.563914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.564041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.564072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.564214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.564239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.564401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.564427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.564546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.564572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.564723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.564763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.564889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.564915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.565024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.565050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.565174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.565200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.565317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.565343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.565462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.565488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.565628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.565654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.565784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.565810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.565982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.566007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.566157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.566183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.566380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.566416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.566550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.566575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.566735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.566762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.566918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.566944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.567084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.567109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.567232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.567256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.567343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.567377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.369 [2024-11-19 11:27:50.567515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.369 [2024-11-19 11:27:50.567541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.369 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.567707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.567733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.567862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.567887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.568008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.568035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.568165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.568189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.568343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.568375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.568419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1053f30 (9): Bad file descriptor
00:25:55.370 [2024-11-19 11:27:50.568559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.568585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.568720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.568745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.568868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.568894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.569053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.569079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.569212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.569239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.569367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.569393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.569525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.569550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.569658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.569683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.569834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.569860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.569985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.570010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.570140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.570165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.570253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.570279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.570384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.570429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.570569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.570595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.570718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.570743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.570864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.570889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.571012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.571036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.571145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.571171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.571299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.571327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.571418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.571445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.571547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.571572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.571734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.571759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.571896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.571923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.572039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.572065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.572221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.572247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.572376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.572402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.572496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.572526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.572654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.572679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.370 [2024-11-19 11:27:50.572809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.370 [2024-11-19 11:27:50.572834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.370 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.572950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.572975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.573132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.573159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.573255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.573281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.573427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.573453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.573574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.573600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.573724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.573750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.573870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.573896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.574013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.574039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.574184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.574210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.574297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.574323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.574485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.574512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.574641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.574667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.574764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.574789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.574938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.574964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.575050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.575074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.575223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.575248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.575397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.575422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.575570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.575595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.575751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.575777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.575875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.575901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.576025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.576050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.576176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.576202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.576357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.576389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.576513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.576538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.576651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.576681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.576831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.576857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.577006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.577031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.577183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.577208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.577332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.577357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.577495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.577521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.577645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.577671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.577770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.577795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.577910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.577936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.578060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.578086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.578199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.371 [2024-11-19 11:27:50.578224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.371 qpair failed and we were unable to recover it.
00:25:55.371 [2024-11-19 11:27:50.578357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.578394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.578478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.578504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.578629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.578657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.578773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.578799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.578929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.578954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.579097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.579123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.579246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.579272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.579403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.579429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.579548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.579575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.579687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.579712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.579831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.579856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.579968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.579994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.580157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.580182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.580296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.580322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.580449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.372 [2024-11-19 11:27:50.580475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.372 qpair failed and we were unable to recover it.
00:25:55.372 [2024-11-19 11:27:50.580621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.580646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.580766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.580791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.580919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.580945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.581090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.581115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.581260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.581285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 
00:25:55.372 [2024-11-19 11:27:50.581432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.581458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.581555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.581581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.581704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.581729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.581848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.581873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.582021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.582046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 
00:25:55.372 [2024-11-19 11:27:50.582129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.582155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.582254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.582280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.582400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.582426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.582538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.582564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.582685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.582710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 
00:25:55.372 [2024-11-19 11:27:50.582901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.372 [2024-11-19 11:27:50.582945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.372 qpair failed and we were unable to recover it. 00:25:55.372 [2024-11-19 11:27:50.583082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.583109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.583231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.583258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.583354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.583392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.583512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.583539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 
00:25:55.373 [2024-11-19 11:27:50.583688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.583724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.583838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.583865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.584018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.584044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.584197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.584223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.584332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.584358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 
00:25:55.373 [2024-11-19 11:27:50.584483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.584509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.584656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.584682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.584799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.584825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.584935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.584960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.585110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.585136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 
00:25:55.373 [2024-11-19 11:27:50.585258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.585283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.585439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.585464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.585584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.585610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.585757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.585783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.585929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.585954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 
00:25:55.373 [2024-11-19 11:27:50.586076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.586101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.586253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.586278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.586393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.586419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.586536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.586562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.586711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.586736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 
00:25:55.373 [2024-11-19 11:27:50.586852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.586877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.586996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.587021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.587170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.587196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.587315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.587340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.587453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.587496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 
00:25:55.373 [2024-11-19 11:27:50.587606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.587645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.587775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.587802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.587929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.587956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.588080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.588111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.588262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.588289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb720000b90 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 
00:25:55.373 [2024-11-19 11:27:50.588420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.588447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.373 qpair failed and we were unable to recover it. 00:25:55.373 [2024-11-19 11:27:50.588596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.373 [2024-11-19 11:27:50.588621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.588740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.588765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.588915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.588940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.589060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.589085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 
00:25:55.374 [2024-11-19 11:27:50.589201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.589226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.589377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.589403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.589517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.589543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.589667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.589692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.589804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.589829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 
00:25:55.374 [2024-11-19 11:27:50.589960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.589985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.590105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.590130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.590245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.590270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.590423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.590449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.590549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.590574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 
00:25:55.374 [2024-11-19 11:27:50.590690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.590715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.590826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.590851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.590972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.590998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.591087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.591112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.591231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.591261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 
00:25:55.374 [2024-11-19 11:27:50.591408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.591434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.591560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.591585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.591697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.591722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.591843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.591868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.591948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.591973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 
00:25:55.374 [2024-11-19 11:27:50.592117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.592142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.592250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.592275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.592360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.592393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.592520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.592545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 00:25:55.374 [2024-11-19 11:27:50.592664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.374 [2024-11-19 11:27:50.592690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.374 qpair failed and we were unable to recover it. 
00:25:55.374 [repeated log records elided: the same three-line failure (posix.c:1054:posix_sock_create "connect() failed, errno = 111"; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error" with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") recurs continuously from 11:27:50.592835 through 11:27:50.610060, alternating among tqpair=0x1045fa0, tqpair=0x7fb720000b90, and tqpair=0x7fb728000b90]
00:25:55.378 [2024-11-19 11:27:50.610240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.610270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.610433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.610459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.610576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.610602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.610710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.610735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.610893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.610918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 
00:25:55.378 [2024-11-19 11:27:50.611128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.611153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.611325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.611350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.611482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.611508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.611655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.611680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.611853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.611878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 
00:25:55.378 [2024-11-19 11:27:50.612033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.612058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.612173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.612198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.612322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.612347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.612477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.612503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.612625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.612651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 
00:25:55.378 [2024-11-19 11:27:50.612809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.612834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.613042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.613067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.613248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.613274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.613405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.613431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.613516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.613541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 
00:25:55.378 [2024-11-19 11:27:50.613641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.613666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.613876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.613901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.614118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.614143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.614355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.614388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.614487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.614513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 
00:25:55.378 [2024-11-19 11:27:50.614659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.614684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.614892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.614918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.615070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.615095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.615300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.615325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.615474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.615500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 
00:25:55.378 [2024-11-19 11:27:50.615689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.615714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.615860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.615885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.616001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.616037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.616173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.378 [2024-11-19 11:27:50.616204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-19 11:27:50.616323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.616348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 
00:25:55.379 [2024-11-19 11:27:50.616485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.616511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.616663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.616688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.616794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.616820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.617025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.617050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.617260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.617285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 
00:25:55.379 [2024-11-19 11:27:50.617450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.617476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.617660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.617709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.617889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.617917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.618093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.618119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.618328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.618354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 
00:25:55.379 [2024-11-19 11:27:50.618501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.618527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.618737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.618763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.618935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.618960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.619151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.619176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.619394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.619421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 
00:25:55.379 [2024-11-19 11:27:50.619548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.619573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.619674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.619699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.619814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.619839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.619955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.619981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.620096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.620127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 
00:25:55.379 [2024-11-19 11:27:50.620213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.620238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.620382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.620409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.620555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.620581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.620704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.620729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.620889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.620914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 
00:25:55.379 [2024-11-19 11:27:50.621025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.621050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.621264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.621289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.621445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.621472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.621594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.621619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.621742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.621767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 
00:25:55.379 [2024-11-19 11:27:50.621956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.621981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.622191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.622217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.622426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.622452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.622686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.622711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.379 qpair failed and we were unable to recover it. 00:25:55.379 [2024-11-19 11:27:50.622864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.379 [2024-11-19 11:27:50.622890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 
00:25:55.380 [2024-11-19 11:27:50.623062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-11-19 11:27:50.623087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-11-19 11:27:50.623215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-11-19 11:27:50.623240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-11-19 11:27:50.623411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-11-19 11:27:50.623437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-11-19 11:27:50.623602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-11-19 11:27:50.623627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-11-19 11:27:50.623832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-11-19 11:27:50.623857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 
00:25:55.380 [2024-11-19 11:27:50.624027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-11-19 11:27:50.624053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-11-19 11:27:50.624191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-11-19 11:27:50.624216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-11-19 11:27:50.624319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-11-19 11:27:50.624344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-11-19 11:27:50.624462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-11-19 11:27:50.624488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 00:25:55.380 [2024-11-19 11:27:50.624611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.380 [2024-11-19 11:27:50.624637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.380 qpair failed and we were unable to recover it. 
00:25:55.380 [2024-11-19 11:27:50.624808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.380 [2024-11-19 11:27:50.624834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.380 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats through 2024-11-19 11:27:50.643655, always with errno = 111, addr=10.0.0.2, port=4420, for tqpair values 0x7fb728000b90, 0x7fb71c000b90, and 0x1045fa0 ...]
00:25:55.383 [2024-11-19 11:27:50.643629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.383 [2024-11-19 11:27:50.643655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420
00:25:55.383 qpair failed and we were unable to recover it.
00:25:55.383 [2024-11-19 11:27:50.643782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.643807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.643954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.643980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.644114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.644140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.644249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.644274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.644396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.644423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 
00:25:55.383 [2024-11-19 11:27:50.644520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.644545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.644661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.644686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.644801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.644826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.644973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.644999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.645112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.645137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 
00:25:55.383 [2024-11-19 11:27:50.645300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.645327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.645499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.645539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.645668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.645697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.645842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.645869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.645977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.646003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 
00:25:55.383 [2024-11-19 11:27:50.646152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.646179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb71c000b90 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.646328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.383 [2024-11-19 11:27:50.646354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.383 qpair failed and we were unable to recover it. 00:25:55.383 [2024-11-19 11:27:50.646464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.646489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.646610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.646635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.646788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.646813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 
00:25:55.384 [2024-11-19 11:27:50.646934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.646959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.647106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.647132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.647290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.647315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.647465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.647491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.647610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.647635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 
00:25:55.384 [2024-11-19 11:27:50.647784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.647809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.647899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.647925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.648069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.648094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.648234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.648259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.648403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.648430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 
00:25:55.384 [2024-11-19 11:27:50.648508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.648534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.648663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.648688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.648802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.648827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.648972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.648997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.649115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.649140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 
00:25:55.384 [2024-11-19 11:27:50.649259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.649285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.649405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.649430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.649582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.649608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.649730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.649755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.649902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.649928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 
00:25:55.384 [2024-11-19 11:27:50.650047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.650072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.650217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.650242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.650373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.650399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.650517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.650543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.650664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.650690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 
00:25:55.384 [2024-11-19 11:27:50.650801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.650826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.650913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.650938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.651081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.651107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.651227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.651252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.651449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.651475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 
00:25:55.384 [2024-11-19 11:27:50.651622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.651648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.651801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.651826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.651947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.384 [2024-11-19 11:27:50.651977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.384 qpair failed and we were unable to recover it. 00:25:55.384 [2024-11-19 11:27:50.652103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.652128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.652241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.652266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 
00:25:55.385 [2024-11-19 11:27:50.652421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.652447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.652543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.652568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.652722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.652747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.652902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.652927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.653071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.653096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 
00:25:55.385 [2024-11-19 11:27:50.653249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.653275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.653408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.653434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.653558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.653583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.653701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.653726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.653874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.653899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 
00:25:55.385 [2024-11-19 11:27:50.654018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.654044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.654162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.654188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.654341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.654374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.654467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.654492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.654640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.654665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 
00:25:55.385 [2024-11-19 11:27:50.654782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.654807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.654950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.654975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.655131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.655156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.655272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.655298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.655444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.655470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 
00:25:55.385 [2024-11-19 11:27:50.655588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.655613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.655734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.655760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.655885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.655910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.656029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.656055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.656168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.656197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 
00:25:55.385 [2024-11-19 11:27:50.656349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.656383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.656475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.656500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.656648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.656673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.656765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.656791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.656886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.656911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 
00:25:55.385 [2024-11-19 11:27:50.657034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.657058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.657173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.657198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.657306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.657332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.657459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.385 [2024-11-19 11:27:50.657485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.385 qpair failed and we were unable to recover it. 00:25:55.385 [2024-11-19 11:27:50.657633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.657659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 
00:25:55.386 [2024-11-19 11:27:50.657777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.657802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.657947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.657972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.658115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.658140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.658284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.658310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.658433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.658459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 
00:25:55.386 [2024-11-19 11:27:50.658581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.658607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.658753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.658778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.658926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.658951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.659100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.659125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.659246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.659271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 
00:25:55.386 [2024-11-19 11:27:50.659398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.659424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.659571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.659597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.659687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.659713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.659858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.659883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.660007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.660033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 
00:25:55.386 [2024-11-19 11:27:50.660176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.660201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.660343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.660375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.660498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.660523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.660642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.660667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.660816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.660842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 
00:25:55.386 [2024-11-19 11:27:50.660994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.661019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.661164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.661189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.661300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.661325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.661450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.661475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.661615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.661640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 
00:25:55.386 [2024-11-19 11:27:50.661786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.661811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.661957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.661982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.662101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.662126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.662245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.662271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 00:25:55.386 [2024-11-19 11:27:50.662426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.386 [2024-11-19 11:27:50.662452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.386 qpair failed and we were unable to recover it. 
00:25:55.386 [2024-11-19 11:27:50.662544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.662569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.662693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.662718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.662834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.662859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.663001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.663026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.663146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.663172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 
00:25:55.387 [2024-11-19 11:27:50.663315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.663340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.663471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.663497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.663608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.663633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.663753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.663778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.663923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.663949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 
00:25:55.387 [2024-11-19 11:27:50.664096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.664121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.664244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.664269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.664399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.664425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.664569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.664594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.664754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.664779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 
00:25:55.387 [2024-11-19 11:27:50.664896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.664922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.665068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.665093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.665188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.665213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.665354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.665392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.665539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.665564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 
00:25:55.387 [2024-11-19 11:27:50.665657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.665682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.665828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.665853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.665971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.665996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.666125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.666150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.666273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.666299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 
00:25:55.387 [2024-11-19 11:27:50.666421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.666446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.666600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.666626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.666714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.666744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.666870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.666894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.667041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.667066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 
00:25:55.387 [2024-11-19 11:27:50.667190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.667217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.667331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.667356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.667479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.667505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.667619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.667645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 00:25:55.387 [2024-11-19 11:27:50.667758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.667783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.387 qpair failed and we were unable to recover it. 
00:25:55.387 [2024-11-19 11:27:50.667900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.387 [2024-11-19 11:27:50.667925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.668041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.668067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.668197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.668222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.668332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.668358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.668482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.668508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 
00:25:55.388 [2024-11-19 11:27:50.668629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.668654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.668805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.668830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.668974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.669000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.669142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.669168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.669316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.669341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 
00:25:55.388 [2024-11-19 11:27:50.669469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.669494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.669612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.669637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.669766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.669791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.669945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.669970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.670085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.670111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 
00:25:55.388 [2024-11-19 11:27:50.670257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.670282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.670409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.670435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.670581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.670606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.670757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.670782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 00:25:55.388 [2024-11-19 11:27:50.670902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.388 [2024-11-19 11:27:50.670932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.388 qpair failed and we were unable to recover it. 
00:25:55.388 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:55.388 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:25:55.388 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:55.388 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:55.388 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:55.391 [2024-11-19 11:27:50.687684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.687709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.687877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.687902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.688027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.688052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.688228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.688253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.688387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.688413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 
00:25:55.391 [2024-11-19 11:27:50.688529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.688555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.688691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.688716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.688878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.688904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.689017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.689042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.689159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.689195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 
00:25:55.391 [2024-11-19 11:27:50.689276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.689305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.689447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.689473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.689565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.689591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.689685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.689710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.689808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.689833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 
00:25:55.391 [2024-11-19 11:27:50.689953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.689978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.690095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.690121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.690265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.690290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.690411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.690437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.391 qpair failed and we were unable to recover it. 00:25:55.391 [2024-11-19 11:27:50.690538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.391 [2024-11-19 11:27:50.690564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-11-19 11:27:50.690682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.690708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.690823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.690848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.690974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.690999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.691115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.691140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.691257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.691282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-11-19 11:27:50.691414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.691440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.691563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.691588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.691707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.691732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.691818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.691843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.691956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.691982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-11-19 11:27:50.692097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.692122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.692248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.692273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.692388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.692414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.692517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.692542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.692663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.692687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:55.392 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:55.392 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.392 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:55.392 [2024-11-19 11:27:50.694049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.694075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.694191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.694217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.694369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.694394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.694491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.694517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.694627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.694656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-11-19 11:27:50.694757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.694794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.694924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.694950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.695082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.695107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.695235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.695259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.695353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.695385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-11-19 11:27:50.695479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.695505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.695635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.695660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.695815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.695840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.696031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.696057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.392 [2024-11-19 11:27:50.696228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.696253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 
00:25:55.392 [2024-11-19 11:27:50.696381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.392 [2024-11-19 11:27:50.696414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.392 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.696537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.696562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.696700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.696725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.696876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.696902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.697022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.697052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-11-19 11:27:50.697210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.697235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.697418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.697444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.697573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.697598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.697751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.697776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.697869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.697894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-11-19 11:27:50.698094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.698125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.698335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.698360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.698485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.698510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.698674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.698698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.698852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.698884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-11-19 11:27:50.699028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.699054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.699245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.699270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.699406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.699432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.699526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.699551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.699723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.699748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-11-19 11:27:50.699862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.699887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.700065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.700091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.700278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.700303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.700433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.700459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 00:25:55.393 [2024-11-19 11:27:50.700554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.393 [2024-11-19 11:27:50.700579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.393 qpair failed and we were unable to recover it. 
00:25:55.393 [2024-11-19 11:27:50.700775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.700801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.700921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.700946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.701178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.701203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.701324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.701349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.701452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.701478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.701594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.701625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.701758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.701783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.701938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.701963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.702122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.702148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.702230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.702254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.702343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.702375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.702494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.702519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.702665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.702691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.702879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.702904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.703097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.703126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.703279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.703304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.703455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.393 [2024-11-19 11:27:50.703480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.393 qpair failed and we were unable to recover it.
00:25:55.393 [2024-11-19 11:27:50.703605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.703642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.703756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.703781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.703948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.703978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.704136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.704162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.704353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.704386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.704488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.704513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.704660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.704686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.704840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.704865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.705020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.705045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.705248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.705273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.705415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.705441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.705522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.705547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.705728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.705753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.705919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.705944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.706093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.706118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.706269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.706294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.706393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.706419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.706567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.706592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.706677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.706702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.706816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.706842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.706989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.707014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.707141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.707166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.707286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.707311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.707438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.707464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.707555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.707581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.707706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.707731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.707892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.707918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.708076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.708101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.708232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.708257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.708382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.708420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.708549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.708574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.708777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.708803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.708922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.708947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.709113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.709138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.709301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.709326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.709488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.709513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.709618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.709643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.709745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.709770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.709883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.709909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.710048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.710073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.710222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.710247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.710401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.394 [2024-11-19 11:27:50.710427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.394 qpair failed and we were unable to recover it.
00:25:55.394 [2024-11-19 11:27:50.710549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.710574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.710701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.710727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.710881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.710906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.711037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.711063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.711199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.711224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.711376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.711402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.711510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.711536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.711662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.711687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.711811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.711836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.711966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.711992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.712128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.712154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.712259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.712285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.712411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.712436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.712577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.712602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.712724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.712753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.712919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.712945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.713165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.713190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.713370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.713401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.713494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.713519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.713689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.713714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.713844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.713869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.713990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.714015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.714188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.714216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.714340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.714372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.714522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.714548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.714731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.714755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.714955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.714980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.715138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.715164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.715293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.715319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.715430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.715456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.715593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.715618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.715751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.715776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.715939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.715965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.716095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.716120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.716257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.716282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.716413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.395 [2024-11-19 11:27:50.716439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.395 qpair failed and we were unable to recover it.
00:25:55.395 [2024-11-19 11:27:50.716606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.395 [2024-11-19 11:27:50.716631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.395 qpair failed and we were unable to recover it. 00:25:55.395 [2024-11-19 11:27:50.716790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.716816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.716900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.716925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.717040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.717065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.717251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.717276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 
00:25:55.396 [2024-11-19 11:27:50.717429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.717455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.717588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.717617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.717827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.717852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.717950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.717976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.718143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.718167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 
00:25:55.396 [2024-11-19 11:27:50.718273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.718298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.718445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.718471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.718620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.718645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.718777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.718802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.718893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.718919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 
00:25:55.396 [2024-11-19 11:27:50.719076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.719101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.719248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.719273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.719394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.719420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.719515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.719540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.719741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.719783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 
00:25:55.396 [2024-11-19 11:27:50.719961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.719988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.720086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.720112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.720229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.720254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.720406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.720433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.720535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.720561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 
00:25:55.396 [2024-11-19 11:27:50.720691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.720717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.720863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.720889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.721049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.721075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb728000b90 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.721230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.721257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.721402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.721428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 
00:25:55.396 [2024-11-19 11:27:50.721527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.721553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.721686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.721711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.721871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.721896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.722091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.722116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.722249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.722275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 
00:25:55.396 [2024-11-19 11:27:50.722440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.722466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.722591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.722617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.722780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.722806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.722963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.722988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.723105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.723130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 
00:25:55.396 [2024-11-19 11:27:50.723306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.723331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.723466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.723491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.723596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.723621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.723790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.723816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 00:25:55.396 [2024-11-19 11:27:50.723935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.396 [2024-11-19 11:27:50.723960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.396 qpair failed and we were unable to recover it. 
00:25:55.397 [2024-11-19 11:27:50.724097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.724123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.724317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.724343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.724477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.724513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.724611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.724637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.724808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.724833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 
00:25:55.397 [2024-11-19 11:27:50.724990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.725015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.725174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.725199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.725388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.725414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.725537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.725563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.725722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.725747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 
00:25:55.397 [2024-11-19 11:27:50.725899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.725926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.726051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.726077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.726272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.726297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.726448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.726474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.726610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.726636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 
00:25:55.397 [2024-11-19 11:27:50.726813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.726839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.727035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.727060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.727139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.727164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.727338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.727369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.727463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.727489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 
00:25:55.397 [2024-11-19 11:27:50.727625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.727651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.727783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.727809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.727962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.727988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.728172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.728198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.728339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.728369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 
00:25:55.397 [2024-11-19 11:27:50.728496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.728522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.728600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.728629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.728848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.728873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.729000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.729034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.729255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.729281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 
00:25:55.397 [2024-11-19 11:27:50.729442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.729468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.729586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.729611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.729766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.729792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.729950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.729976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.730122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.730148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 
00:25:55.397 [2024-11-19 11:27:50.730261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.730287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.730415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.730442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.730567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.730592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.730714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.730739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.730827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.730853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 
00:25:55.397 [2024-11-19 11:27:50.731003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.731028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.731179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.731205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.731334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.731360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.731517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.731542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 00:25:55.397 [2024-11-19 11:27:50.731690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.397 [2024-11-19 11:27:50.731716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.397 qpair failed and we were unable to recover it. 
00:25:55.398 [2024-11-19 11:27:50.731866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.398 [2024-11-19 11:27:50.731891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.398 qpair failed and we were unable to recover it. 00:25:55.398 [2024-11-19 11:27:50.732016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.398 [2024-11-19 11:27:50.732042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.398 qpair failed and we were unable to recover it. 00:25:55.398 [2024-11-19 11:27:50.732195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.398 [2024-11-19 11:27:50.732220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.398 qpair failed and we were unable to recover it. 00:25:55.398 [2024-11-19 11:27:50.732301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.398 [2024-11-19 11:27:50.732326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.398 qpair failed and we were unable to recover it. 00:25:55.398 [2024-11-19 11:27:50.732457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.398 [2024-11-19 11:27:50.732483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.398 qpair failed and we were unable to recover it. 
00:25:55.398 [2024-11-19 11:27:50.732603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.732629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.732779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.732804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.732917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.732943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.733055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.733080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.733236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.733260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.733408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.733439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.733525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.733550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.733698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.733724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.733850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.733875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.733991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.734016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.734167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.734192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.734317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.734342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.734510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.734536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.734659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.734684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.734832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.734857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.734976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.735002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.735126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.735152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.735302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.735327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.735487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.735512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.735663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.735689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.735808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.735833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.735945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.735970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.736125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.736150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.736266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.736291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.736444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.736470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.736613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.736638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.736750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.736775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.736931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.736956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.737104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.737129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.737217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.737242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.737368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.737393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.737540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.737565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 Malloc0
00:25:55.398 [2024-11-19 11:27:50.737713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.737742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.737877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.737903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.738014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.738039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.398 [2024-11-19 11:27:50.738155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.738185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 [2024-11-19 11:27:50.738313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.738338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:55.398 [2024-11-19 11:27:50.738492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.738517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.398 [2024-11-19 11:27:50.738640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.398 [2024-11-19 11:27:50.738665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.398 qpair failed and we were unable to recover it.
00:25:55.398 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:55.399 [2024-11-19 11:27:50.738815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.738840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.738958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.738983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.739133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.739158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.739308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.739334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.739495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.739520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.739635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.739664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.739811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.739837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.739980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.740005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.740152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.740178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.740351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.740385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.740501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.740526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.740640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.740665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.740818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.740843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.740957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.740982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.741201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.741226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.741377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:55.399 [2024-11-19 11:27:50.741450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.741475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.741605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.741631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.741804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.741830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.741955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.741984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.742087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.742113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.742266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.742292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.742520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.742546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.742707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.742733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.742952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.742977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.743128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.743153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.743276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.743301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.743411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.743437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.743558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.743584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.743799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.743824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.744025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.744050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.744228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.744253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.744425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.744451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.744661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.744687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.744852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.744877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.745029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.745054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.745261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.745286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.745465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.745491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.745693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.745719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.745883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.399 [2024-11-19 11:27:50.745909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.399 qpair failed and we were unable to recover it.
00:25:55.399 [2024-11-19 11:27:50.746131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.746155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.746281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.746306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.746549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.746575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.746731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.746756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.746994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.747019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.747138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.747163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.747317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.747342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.747536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.747564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.747798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.747823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.747941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.747966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.748192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.748217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.748342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.748372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.748559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.748584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.748755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.748780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.748987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.749011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.749217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:55.400 [2024-11-19 11:27:50.749242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420
00:25:55.400 qpair failed and we were unable to recover it.
00:25:55.400 [2024-11-19 11:27:50.749357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.749399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.400 [2024-11-19 11:27:50.749620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.749646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.749813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:55.400 [2024-11-19 11:27:50.749838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.749974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.400 [2024-11-19 11:27:50.749999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 
00:25:55.400 [2024-11-19 11:27:50.750087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.750111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.400 [2024-11-19 11:27:50.750240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.750265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.750429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.750455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.750678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.750703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.750824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.750849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 
00:25:55.400 [2024-11-19 11:27:50.751003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.751029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.751235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.751260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.751443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.751468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.751677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.751702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.751849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.751875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 
00:25:55.400 [2024-11-19 11:27:50.752059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.752085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.752214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.752239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.752378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.752406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.752622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.752646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.752860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.752885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 
00:25:55.400 [2024-11-19 11:27:50.753006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.753031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.753184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.753209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.753447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.753473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.753679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.753705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.753831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.753857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 
00:25:55.400 [2024-11-19 11:27:50.754021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.754046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.754202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.754227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.400 [2024-11-19 11:27:50.754350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.400 [2024-11-19 11:27:50.754383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.400 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.754596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.754622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.754784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.754809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 
00:25:55.401 [2024-11-19 11:27:50.754967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.754996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.755149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.755175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.755306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.755331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.755441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.755466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.755670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.755695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 
00:25:55.401 [2024-11-19 11:27:50.755895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.755920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.756047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.756072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.756238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.756264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.756414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.756440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.756618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.756643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 
00:25:55.401 [2024-11-19 11:27:50.756796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.756822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.756978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.757003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.757127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.757153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.757318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.757343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.757573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.757599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 
00:25:55.401 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.401 [2024-11-19 11:27:50.757777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.757801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:55.401 [2024-11-19 11:27:50.757943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.757968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.401 [2024-11-19 11:27:50.758119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.758145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.401 [2024-11-19 11:27:50.758360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.758391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 
00:25:55.401 [2024-11-19 11:27:50.758545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.758570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.758690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.758716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.758918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.758943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.759110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.759136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.759241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.759267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 
00:25:55.401 [2024-11-19 11:27:50.759432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.759457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.759609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.759634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.759851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.759876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.760074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.760100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.760259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.760284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 
00:25:55.401 [2024-11-19 11:27:50.760481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.760507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.760695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.760720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.760886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.760911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.761054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.761079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.761252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.761277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 
00:25:55.401 [2024-11-19 11:27:50.761457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.761484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.761631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.761656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.761788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.761813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.761969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.761995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.762198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.762223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 
00:25:55.401 [2024-11-19 11:27:50.762411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.762441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.762608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.762633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.762848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.762873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.763038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.763063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 00:25:55.401 [2024-11-19 11:27:50.763218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.401 [2024-11-19 11:27:50.763243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.401 qpair failed and we were unable to recover it. 
00:25:55.401 [2024-11-19 11:27:50.763406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.763432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.763583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.763609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.763769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.763794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.763973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.763998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.764184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.764209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 
00:25:55.402 [2024-11-19 11:27:50.764308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.764333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.764484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.764510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.764629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.764655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.764817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.764842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.765008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.765033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 
00:25:55.402 [2024-11-19 11:27:50.765198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.765224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.765376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.765402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.765544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.765569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.402 [2024-11-19 11:27:50.765702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.765730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 
00:25:55.402 [2024-11-19 11:27:50.765883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.402 [2024-11-19 11:27:50.765908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.766065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.402 [2024-11-19 11:27:50.766090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.402 [2024-11-19 11:27:50.766321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.766347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.766593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.766618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 
00:25:55.402 [2024-11-19 11:27:50.766805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.766830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.766963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.766988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.767180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.767210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.767374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.767399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.767621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.767646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 
00:25:55.402 [2024-11-19 11:27:50.767825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.767850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.768059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.768085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.768244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.768269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.768434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.768460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.768669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.768694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 
00:25:55.402 [2024-11-19 11:27:50.768788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.768813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.768941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.768966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.769121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.769146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.769272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.769296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 00:25:55.402 [2024-11-19 11:27:50.769422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.402 [2024-11-19 11:27:50.769449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1045fa0 with addr=10.0.0.2, port=4420 00:25:55.402 qpair failed and we were unable to recover it. 
00:25:55.402 [2024-11-19 11:27:50.769643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.402 [2024-11-19 11:27:50.772277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.402 [2024-11-19 11:27:50.772442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.402 [2024-11-19 11:27:50.772475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.402 [2024-11-19 11:27:50.772492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.402 [2024-11-19 11:27:50.772504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.402 [2024-11-19 11:27:50.772542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.402 qpair failed and we were unable to recover it. 
00:25:55.402 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.402 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:55.402 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.402 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.402 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.402 11:27:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2727576 00:25:55.402 [2024-11-19 11:27:50.782105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.402 [2024-11-19 11:27:50.782190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.402 [2024-11-19 11:27:50.782217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.402 [2024-11-19 11:27:50.782232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.402 [2024-11-19 11:27:50.782244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.402 [2024-11-19 11:27:50.782273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.402 qpair failed and we were unable to recover it. 
00:25:55.402 [2024-11-19 11:27:50.792084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.402 [2024-11-19 11:27:50.792184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.402 [2024-11-19 11:27:50.792208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.402 [2024-11-19 11:27:50.792223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.402 [2024-11-19 11:27:50.792235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.402 [2024-11-19 11:27:50.792264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.402 qpair failed and we were unable to recover it. 
00:25:55.402 [2024-11-19 11:27:50.802086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.402 [2024-11-19 11:27:50.802219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.402 [2024-11-19 11:27:50.802244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.402 [2024-11-19 11:27:50.802259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.403 [2024-11-19 11:27:50.802277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.403 [2024-11-19 11:27:50.802306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.403 qpair failed and we were unable to recover it. 
00:25:55.403 [2024-11-19 11:27:50.812007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.403 [2024-11-19 11:27:50.812117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.403 [2024-11-19 11:27:50.812143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.403 [2024-11-19 11:27:50.812158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.403 [2024-11-19 11:27:50.812170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.403 [2024-11-19 11:27:50.812199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.403 qpair failed and we were unable to recover it. 
00:25:55.403 [2024-11-19 11:27:50.822061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.403 [2024-11-19 11:27:50.822163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.403 [2024-11-19 11:27:50.822188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.403 [2024-11-19 11:27:50.822201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.403 [2024-11-19 11:27:50.822213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.403 [2024-11-19 11:27:50.822242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.403 qpair failed and we were unable to recover it. 
00:25:55.403 [2024-11-19 11:27:50.832048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.403 [2024-11-19 11:27:50.832132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.403 [2024-11-19 11:27:50.832156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.403 [2024-11-19 11:27:50.832170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.403 [2024-11-19 11:27:50.832182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.403 [2024-11-19 11:27:50.832212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.403 qpair failed and we were unable to recover it. 
00:25:55.662 [2024-11-19 11:27:50.842088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.662 [2024-11-19 11:27:50.842198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.662 [2024-11-19 11:27:50.842224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.662 [2024-11-19 11:27:50.842238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.662 [2024-11-19 11:27:50.842250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.662 [2024-11-19 11:27:50.842278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.662 qpair failed and we were unable to recover it. 
00:25:55.662 [2024-11-19 11:27:50.852167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.662 [2024-11-19 11:27:50.852271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.662 [2024-11-19 11:27:50.852301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.662 [2024-11-19 11:27:50.852316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.662 [2024-11-19 11:27:50.852329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.662 [2024-11-19 11:27:50.852358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.662 qpair failed and we were unable to recover it. 
00:25:55.662 [2024-11-19 11:27:50.862184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.662 [2024-11-19 11:27:50.862302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.662 [2024-11-19 11:27:50.862328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.662 [2024-11-19 11:27:50.862343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.662 [2024-11-19 11:27:50.862356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.662 [2024-11-19 11:27:50.862394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.662 qpair failed and we were unable to recover it. 
00:25:55.662 [2024-11-19 11:27:50.872206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.662 [2024-11-19 11:27:50.872327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.662 [2024-11-19 11:27:50.872352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.662 [2024-11-19 11:27:50.872374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.662 [2024-11-19 11:27:50.872389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.662 [2024-11-19 11:27:50.872418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.662 qpair failed and we were unable to recover it. 
00:25:55.662 [2024-11-19 11:27:50.882217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.662 [2024-11-19 11:27:50.882324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.662 [2024-11-19 11:27:50.882352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.662 [2024-11-19 11:27:50.882375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.662 [2024-11-19 11:27:50.882389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.662 [2024-11-19 11:27:50.882419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.662 qpair failed and we were unable to recover it. 
00:25:55.662 [2024-11-19 11:27:50.892255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.662 [2024-11-19 11:27:50.892360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.662 [2024-11-19 11:27:50.892397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.662 [2024-11-19 11:27:50.892413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.662 [2024-11-19 11:27:50.892425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.662 [2024-11-19 11:27:50.892454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.662 qpair failed and we were unable to recover it. 
00:25:55.662 [2024-11-19 11:27:50.902266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.662 [2024-11-19 11:27:50.902355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.662 [2024-11-19 11:27:50.902387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.662 [2024-11-19 11:27:50.902402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.662 [2024-11-19 11:27:50.902414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.662 [2024-11-19 11:27:50.902442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.662 qpair failed and we were unable to recover it. 
00:25:55.662 [2024-11-19 11:27:50.912305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.662 [2024-11-19 11:27:50.912415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.662 [2024-11-19 11:27:50.912445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.662 [2024-11-19 11:27:50.912460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.663 [2024-11-19 11:27:50.912472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.663 [2024-11-19 11:27:50.912500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.663 qpair failed and we were unable to recover it. 
00:25:55.663 [2024-11-19 11:27:50.922376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.663 [2024-11-19 11:27:50.922469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.663 [2024-11-19 11:27:50.922493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.663 [2024-11-19 11:27:50.922507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.663 [2024-11-19 11:27:50.922519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.663 [2024-11-19 11:27:50.922548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.663 qpair failed and we were unable to recover it. 
00:25:55.663 [2024-11-19 11:27:50.932393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.663 [2024-11-19 11:27:50.932530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.663 [2024-11-19 11:27:50.932555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.663 [2024-11-19 11:27:50.932570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.663 [2024-11-19 11:27:50.932587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.663 [2024-11-19 11:27:50.932617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.663 qpair failed and we were unable to recover it. 
00:25:55.663 [2024-11-19 11:27:50.942446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.663 [2024-11-19 11:27:50.942532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.663 [2024-11-19 11:27:50.942556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.663 [2024-11-19 11:27:50.942570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.663 [2024-11-19 11:27:50.942582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.663 [2024-11-19 11:27:50.942611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.663 qpair failed and we were unable to recover it. 
00:25:55.663 [2024-11-19 11:27:50.952453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.663 [2024-11-19 11:27:50.952534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.663 [2024-11-19 11:27:50.952558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.663 [2024-11-19 11:27:50.952572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.663 [2024-11-19 11:27:50.952584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.663 [2024-11-19 11:27:50.952613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.663 qpair failed and we were unable to recover it. 
00:25:55.663 [2024-11-19 11:27:50.962518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.663 [2024-11-19 11:27:50.962608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.663 [2024-11-19 11:27:50.962631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.663 [2024-11-19 11:27:50.962644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.663 [2024-11-19 11:27:50.962656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.663 [2024-11-19 11:27:50.962684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.663 qpair failed and we were unable to recover it. 
00:25:55.663 [2024-11-19 11:27:50.972557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.663 [2024-11-19 11:27:50.972657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.663 [2024-11-19 11:27:50.972682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.663 [2024-11-19 11:27:50.972697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.663 [2024-11-19 11:27:50.972709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.663 [2024-11-19 11:27:50.972737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.663 qpair failed and we were unable to recover it. 
00:25:55.663 [2024-11-19 11:27:50.982570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.663 [2024-11-19 11:27:50.982696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.663 [2024-11-19 11:27:50.982722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.663 [2024-11-19 11:27:50.982737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.663 [2024-11-19 11:27:50.982748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.663 [2024-11-19 11:27:50.982776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.663 qpair failed and we were unable to recover it. 
00:25:55.663 [2024-11-19 11:27:50.992557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.663 [2024-11-19 11:27:50.992645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.663 [2024-11-19 11:27:50.992669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.663 [2024-11-19 11:27:50.992683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.663 [2024-11-19 11:27:50.992695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.663 [2024-11-19 11:27:50.992723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.663 qpair failed and we were unable to recover it. 
00:25:55.663 [2024-11-19 11:27:51.002661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.663 [2024-11-19 11:27:51.002768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.663 [2024-11-19 11:27:51.002792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.663 [2024-11-19 11:27:51.002806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.663 [2024-11-19 11:27:51.002818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.663 [2024-11-19 11:27:51.002847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.663 qpair failed and we were unable to recover it.
00:25:55.663 [2024-11-19 11:27:51.012616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.663 [2024-11-19 11:27:51.012734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.663 [2024-11-19 11:27:51.012758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.663 [2024-11-19 11:27:51.012772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.663 [2024-11-19 11:27:51.012784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.663 [2024-11-19 11:27:51.012812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.663 qpair failed and we were unable to recover it.
00:25:55.663 [2024-11-19 11:27:51.022720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.663 [2024-11-19 11:27:51.022831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.663 [2024-11-19 11:27:51.022861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.663 [2024-11-19 11:27:51.022877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.663 [2024-11-19 11:27:51.022889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.663 [2024-11-19 11:27:51.022921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.663 qpair failed and we were unable to recover it.
00:25:55.663 [2024-11-19 11:27:51.032729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.663 [2024-11-19 11:27:51.032880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.663 [2024-11-19 11:27:51.032905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.663 [2024-11-19 11:27:51.032919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.663 [2024-11-19 11:27:51.032932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.663 [2024-11-19 11:27:51.032970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.663 qpair failed and we were unable to recover it.
00:25:55.664 [2024-11-19 11:27:51.042731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.664 [2024-11-19 11:27:51.042836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.664 [2024-11-19 11:27:51.042861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.664 [2024-11-19 11:27:51.042876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.664 [2024-11-19 11:27:51.042888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.664 [2024-11-19 11:27:51.042916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.664 qpair failed and we were unable to recover it.
00:25:55.664 [2024-11-19 11:27:51.052737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.664 [2024-11-19 11:27:51.052836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.664 [2024-11-19 11:27:51.052865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.664 [2024-11-19 11:27:51.052879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.664 [2024-11-19 11:27:51.052891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.664 [2024-11-19 11:27:51.052920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.664 qpair failed and we were unable to recover it.
00:25:55.664 [2024-11-19 11:27:51.062793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.664 [2024-11-19 11:27:51.062903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.664 [2024-11-19 11:27:51.062928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.664 [2024-11-19 11:27:51.062942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.664 [2024-11-19 11:27:51.062960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.664 [2024-11-19 11:27:51.062989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.664 qpair failed and we were unable to recover it.
00:25:55.664 [2024-11-19 11:27:51.072818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.664 [2024-11-19 11:27:51.072926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.664 [2024-11-19 11:27:51.072949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.664 [2024-11-19 11:27:51.072963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.664 [2024-11-19 11:27:51.072976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.664 [2024-11-19 11:27:51.073005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.664 qpair failed and we were unable to recover it.
00:25:55.664 [2024-11-19 11:27:51.082848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.664 [2024-11-19 11:27:51.082958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.664 [2024-11-19 11:27:51.082984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.664 [2024-11-19 11:27:51.082999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.664 [2024-11-19 11:27:51.083010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.664 [2024-11-19 11:27:51.083040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.664 qpair failed and we were unable to recover it.
00:25:55.664 [2024-11-19 11:27:51.092832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.664 [2024-11-19 11:27:51.092932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.664 [2024-11-19 11:27:51.092956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.664 [2024-11-19 11:27:51.092970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.664 [2024-11-19 11:27:51.092982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.664 [2024-11-19 11:27:51.093011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.664 qpair failed and we were unable to recover it.
00:25:55.664 [2024-11-19 11:27:51.102889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.664 [2024-11-19 11:27:51.103002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.664 [2024-11-19 11:27:51.103027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.664 [2024-11-19 11:27:51.103042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.664 [2024-11-19 11:27:51.103054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.664 [2024-11-19 11:27:51.103083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.664 qpair failed and we were unable to recover it.
00:25:55.664 [2024-11-19 11:27:51.112890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.664 [2024-11-19 11:27:51.112991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.664 [2024-11-19 11:27:51.113017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.664 [2024-11-19 11:27:51.113031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.664 [2024-11-19 11:27:51.113043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.664 [2024-11-19 11:27:51.113071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.664 qpair failed and we were unable to recover it.
00:25:55.664 [2024-11-19 11:27:51.122980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.664 [2024-11-19 11:27:51.123087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.664 [2024-11-19 11:27:51.123112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.664 [2024-11-19 11:27:51.123126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.664 [2024-11-19 11:27:51.123138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.664 [2024-11-19 11:27:51.123166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.664 qpair failed and we were unable to recover it.
00:25:55.664 [2024-11-19 11:27:51.132966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.664 [2024-11-19 11:27:51.133076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.664 [2024-11-19 11:27:51.133101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.664 [2024-11-19 11:27:51.133115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.664 [2024-11-19 11:27:51.133127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.664 [2024-11-19 11:27:51.133156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.664 qpair failed and we were unable to recover it.
00:25:55.664 [2024-11-19 11:27:51.143001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.664 [2024-11-19 11:27:51.143123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.664 [2024-11-19 11:27:51.143148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.664 [2024-11-19 11:27:51.143162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.664 [2024-11-19 11:27:51.143174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.664 [2024-11-19 11:27:51.143202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.664 qpair failed and we were unable to recover it.
00:25:55.664 [2024-11-19 11:27:51.153065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.664 [2024-11-19 11:27:51.153163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.664 [2024-11-19 11:27:51.153192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.664 [2024-11-19 11:27:51.153207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.664 [2024-11-19 11:27:51.153219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.664 [2024-11-19 11:27:51.153248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.664 qpair failed and we were unable to recover it.
00:25:55.924 [2024-11-19 11:27:51.163104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.924 [2024-11-19 11:27:51.163253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.924 [2024-11-19 11:27:51.163278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.924 [2024-11-19 11:27:51.163293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.924 [2024-11-19 11:27:51.163305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.924 [2024-11-19 11:27:51.163334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.924 qpair failed and we were unable to recover it.
00:25:55.924 [2024-11-19 11:27:51.173081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.924 [2024-11-19 11:27:51.173181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.924 [2024-11-19 11:27:51.173205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.924 [2024-11-19 11:27:51.173219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.924 [2024-11-19 11:27:51.173231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.924 [2024-11-19 11:27:51.173267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.924 qpair failed and we were unable to recover it.
00:25:55.924 [2024-11-19 11:27:51.183131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.924 [2024-11-19 11:27:51.183231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.924 [2024-11-19 11:27:51.183257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.924 [2024-11-19 11:27:51.183271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.924 [2024-11-19 11:27:51.183283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.924 [2024-11-19 11:27:51.183312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.924 qpair failed and we were unable to recover it.
00:25:55.924 [2024-11-19 11:27:51.193209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.924 [2024-11-19 11:27:51.193312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.924 [2024-11-19 11:27:51.193337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.924 [2024-11-19 11:27:51.193352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.924 [2024-11-19 11:27:51.193377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.924 [2024-11-19 11:27:51.193408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.924 qpair failed and we were unable to recover it.
00:25:55.924 [2024-11-19 11:27:51.203162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.924 [2024-11-19 11:27:51.203271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.924 [2024-11-19 11:27:51.203297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.924 [2024-11-19 11:27:51.203312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.924 [2024-11-19 11:27:51.203324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.924 [2024-11-19 11:27:51.203352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.924 qpair failed and we were unable to recover it.
00:25:55.924 [2024-11-19 11:27:51.213232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.924 [2024-11-19 11:27:51.213333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.924 [2024-11-19 11:27:51.213371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.924 [2024-11-19 11:27:51.213389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.924 [2024-11-19 11:27:51.213401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.924 [2024-11-19 11:27:51.213430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.924 qpair failed and we were unable to recover it.
00:25:55.924 [2024-11-19 11:27:51.223257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.924 [2024-11-19 11:27:51.223357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.924 [2024-11-19 11:27:51.223388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.924 [2024-11-19 11:27:51.223402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.924 [2024-11-19 11:27:51.223414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.924 [2024-11-19 11:27:51.223442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.924 qpair failed and we were unable to recover it.
00:25:55.924 [2024-11-19 11:27:51.233258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.924 [2024-11-19 11:27:51.233357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.924 [2024-11-19 11:27:51.233388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.924 [2024-11-19 11:27:51.233402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.924 [2024-11-19 11:27:51.233414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.924 [2024-11-19 11:27:51.233443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.924 qpair failed and we were unable to recover it.
00:25:55.924 [2024-11-19 11:27:51.243386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.924 [2024-11-19 11:27:51.243481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.924 [2024-11-19 11:27:51.243504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.924 [2024-11-19 11:27:51.243519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.924 [2024-11-19 11:27:51.243531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.925 [2024-11-19 11:27:51.243559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.925 qpair failed and we were unable to recover it.
00:25:55.925 [2024-11-19 11:27:51.253272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.925 [2024-11-19 11:27:51.253381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.925 [2024-11-19 11:27:51.253407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.925 [2024-11-19 11:27:51.253421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.925 [2024-11-19 11:27:51.253434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.925 [2024-11-19 11:27:51.253462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.925 qpair failed and we were unable to recover it.
00:25:55.925 [2024-11-19 11:27:51.263390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.925 [2024-11-19 11:27:51.263481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.925 [2024-11-19 11:27:51.263509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.925 [2024-11-19 11:27:51.263524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.925 [2024-11-19 11:27:51.263535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.925 [2024-11-19 11:27:51.263564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.925 qpair failed and we were unable to recover it.
00:25:55.925 [2024-11-19 11:27:51.273455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.925 [2024-11-19 11:27:51.273539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.925 [2024-11-19 11:27:51.273563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.925 [2024-11-19 11:27:51.273577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.925 [2024-11-19 11:27:51.273589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.925 [2024-11-19 11:27:51.273618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.925 qpair failed and we were unable to recover it.
00:25:55.925 [2024-11-19 11:27:51.283413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.925 [2024-11-19 11:27:51.283506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.925 [2024-11-19 11:27:51.283537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.925 [2024-11-19 11:27:51.283552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.925 [2024-11-19 11:27:51.283565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.925 [2024-11-19 11:27:51.283594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.925 qpair failed and we were unable to recover it.
00:25:55.925 [2024-11-19 11:27:51.293456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.925 [2024-11-19 11:27:51.293557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.925 [2024-11-19 11:27:51.293582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.925 [2024-11-19 11:27:51.293597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.925 [2024-11-19 11:27:51.293609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.925 [2024-11-19 11:27:51.293638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.925 qpair failed and we were unable to recover it.
00:25:55.925 [2024-11-19 11:27:51.303467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.925 [2024-11-19 11:27:51.303557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.925 [2024-11-19 11:27:51.303581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.925 [2024-11-19 11:27:51.303595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.925 [2024-11-19 11:27:51.303607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.925 [2024-11-19 11:27:51.303636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.925 qpair failed and we were unable to recover it.
00:25:55.925 [2024-11-19 11:27:51.313478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.925 [2024-11-19 11:27:51.313569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.925 [2024-11-19 11:27:51.313598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.925 [2024-11-19 11:27:51.313612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.925 [2024-11-19 11:27:51.313624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.925 [2024-11-19 11:27:51.313653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.925 qpair failed and we were unable to recover it.
00:25:55.925 [2024-11-19 11:27:51.323532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.925 [2024-11-19 11:27:51.323623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.925 [2024-11-19 11:27:51.323647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.925 [2024-11-19 11:27:51.323661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.925 [2024-11-19 11:27:51.323682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.925 [2024-11-19 11:27:51.323712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.925 qpair failed and we were unable to recover it.
00:25:55.925 [2024-11-19 11:27:51.333614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.925 [2024-11-19 11:27:51.333720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.925 [2024-11-19 11:27:51.333745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.925 [2024-11-19 11:27:51.333759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.925 [2024-11-19 11:27:51.333771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.925 [2024-11-19 11:27:51.333799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.925 qpair failed and we were unable to recover it.
00:25:55.925 [2024-11-19 11:27:51.343574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:55.925 [2024-11-19 11:27:51.343672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:55.925 [2024-11-19 11:27:51.343697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:55.925 [2024-11-19 11:27:51.343712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:55.925 [2024-11-19 11:27:51.343724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:55.925 [2024-11-19 11:27:51.343752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:55.925 qpair failed and we were unable to recover it.
00:25:55.925 [2024-11-19 11:27:51.353697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.925 [2024-11-19 11:27:51.353800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.925 [2024-11-19 11:27:51.353825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.925 [2024-11-19 11:27:51.353839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.925 [2024-11-19 11:27:51.353851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.925 [2024-11-19 11:27:51.353879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.926 qpair failed and we were unable to recover it. 
00:25:55.926 [2024-11-19 11:27:51.363648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.926 [2024-11-19 11:27:51.363778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.926 [2024-11-19 11:27:51.363801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.926 [2024-11-19 11:27:51.363815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.926 [2024-11-19 11:27:51.363827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.926 [2024-11-19 11:27:51.363855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.926 qpair failed and we were unable to recover it. 
00:25:55.926 [2024-11-19 11:27:51.373750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.926 [2024-11-19 11:27:51.373854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.926 [2024-11-19 11:27:51.373878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.926 [2024-11-19 11:27:51.373892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.926 [2024-11-19 11:27:51.373904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.926 [2024-11-19 11:27:51.373933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.926 qpair failed and we were unable to recover it. 
00:25:55.926 [2024-11-19 11:27:51.383744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.926 [2024-11-19 11:27:51.383842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.926 [2024-11-19 11:27:51.383870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.926 [2024-11-19 11:27:51.383885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.926 [2024-11-19 11:27:51.383897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.926 [2024-11-19 11:27:51.383926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.926 qpair failed and we were unable to recover it. 
00:25:55.926 [2024-11-19 11:27:51.393764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.926 [2024-11-19 11:27:51.393863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.926 [2024-11-19 11:27:51.393888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.926 [2024-11-19 11:27:51.393902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.926 [2024-11-19 11:27:51.393914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.926 [2024-11-19 11:27:51.393943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.926 qpair failed and we were unable to recover it. 
00:25:55.926 [2024-11-19 11:27:51.403802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.926 [2024-11-19 11:27:51.403945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.926 [2024-11-19 11:27:51.403970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.926 [2024-11-19 11:27:51.403984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.926 [2024-11-19 11:27:51.403996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.926 [2024-11-19 11:27:51.404025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.926 qpair failed and we were unable to recover it. 
00:25:55.926 [2024-11-19 11:27:51.413830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.926 [2024-11-19 11:27:51.413946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.926 [2024-11-19 11:27:51.413975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.926 [2024-11-19 11:27:51.413990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.926 [2024-11-19 11:27:51.414002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:55.926 [2024-11-19 11:27:51.414031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.926 qpair failed and we were unable to recover it. 
00:25:56.185 [2024-11-19 11:27:51.423871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.185 [2024-11-19 11:27:51.423974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.185 [2024-11-19 11:27:51.423999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.185 [2024-11-19 11:27:51.424014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.185 [2024-11-19 11:27:51.424026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.424055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.186 [2024-11-19 11:27:51.433920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.186 [2024-11-19 11:27:51.434018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.186 [2024-11-19 11:27:51.434042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.186 [2024-11-19 11:27:51.434056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.186 [2024-11-19 11:27:51.434068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.434106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.186 [2024-11-19 11:27:51.443867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.186 [2024-11-19 11:27:51.443975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.186 [2024-11-19 11:27:51.444001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.186 [2024-11-19 11:27:51.444015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.186 [2024-11-19 11:27:51.444027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.444055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.186 [2024-11-19 11:27:51.453884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.186 [2024-11-19 11:27:51.453988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.186 [2024-11-19 11:27:51.454013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.186 [2024-11-19 11:27:51.454027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.186 [2024-11-19 11:27:51.454044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.454073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.186 [2024-11-19 11:27:51.463954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.186 [2024-11-19 11:27:51.464062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.186 [2024-11-19 11:27:51.464087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.186 [2024-11-19 11:27:51.464102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.186 [2024-11-19 11:27:51.464114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.464142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.186 [2024-11-19 11:27:51.473994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.186 [2024-11-19 11:27:51.474091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.186 [2024-11-19 11:27:51.474117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.186 [2024-11-19 11:27:51.474131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.186 [2024-11-19 11:27:51.474142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.474172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.186 [2024-11-19 11:27:51.484039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.186 [2024-11-19 11:27:51.484146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.186 [2024-11-19 11:27:51.484171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.186 [2024-11-19 11:27:51.484185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.186 [2024-11-19 11:27:51.484198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.484226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.186 [2024-11-19 11:27:51.494064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.186 [2024-11-19 11:27:51.494171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.186 [2024-11-19 11:27:51.494196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.186 [2024-11-19 11:27:51.494210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.186 [2024-11-19 11:27:51.494223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.494251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.186 [2024-11-19 11:27:51.504074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.186 [2024-11-19 11:27:51.504172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.186 [2024-11-19 11:27:51.504198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.186 [2024-11-19 11:27:51.504212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.186 [2024-11-19 11:27:51.504225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.504253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.186 [2024-11-19 11:27:51.514048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.186 [2024-11-19 11:27:51.514152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.186 [2024-11-19 11:27:51.514178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.186 [2024-11-19 11:27:51.514192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.186 [2024-11-19 11:27:51.514204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.514232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.186 [2024-11-19 11:27:51.524089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.186 [2024-11-19 11:27:51.524194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.186 [2024-11-19 11:27:51.524219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.186 [2024-11-19 11:27:51.524234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.186 [2024-11-19 11:27:51.524245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.524273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.186 [2024-11-19 11:27:51.534177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.186 [2024-11-19 11:27:51.534281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.186 [2024-11-19 11:27:51.534306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.186 [2024-11-19 11:27:51.534320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.186 [2024-11-19 11:27:51.534333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.534368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.186 [2024-11-19 11:27:51.544168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.186 [2024-11-19 11:27:51.544267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.186 [2024-11-19 11:27:51.544297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.186 [2024-11-19 11:27:51.544313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.186 [2024-11-19 11:27:51.544325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.186 [2024-11-19 11:27:51.544353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.186 qpair failed and we were unable to recover it. 
00:25:56.187 [2024-11-19 11:27:51.554207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.187 [2024-11-19 11:27:51.554312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.187 [2024-11-19 11:27:51.554335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.187 [2024-11-19 11:27:51.554349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.187 [2024-11-19 11:27:51.554369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.187 [2024-11-19 11:27:51.554400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.187 qpair failed and we were unable to recover it. 
00:25:56.187 [2024-11-19 11:27:51.564398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.187 [2024-11-19 11:27:51.564503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.187 [2024-11-19 11:27:51.564532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.187 [2024-11-19 11:27:51.564547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.187 [2024-11-19 11:27:51.564559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.187 [2024-11-19 11:27:51.564589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.187 qpair failed and we were unable to recover it. 
00:25:56.187 [2024-11-19 11:27:51.574286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.187 [2024-11-19 11:27:51.574403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.187 [2024-11-19 11:27:51.574427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.187 [2024-11-19 11:27:51.574441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.187 [2024-11-19 11:27:51.574454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.187 [2024-11-19 11:27:51.574483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.187 qpair failed and we were unable to recover it. 
00:25:56.187 [2024-11-19 11:27:51.584357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.187 [2024-11-19 11:27:51.584450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.187 [2024-11-19 11:27:51.584473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.187 [2024-11-19 11:27:51.584487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.187 [2024-11-19 11:27:51.584505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.187 [2024-11-19 11:27:51.584534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.187 qpair failed and we were unable to recover it. 
00:25:56.187 [2024-11-19 11:27:51.594419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.187 [2024-11-19 11:27:51.594502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.187 [2024-11-19 11:27:51.594526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.187 [2024-11-19 11:27:51.594540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.187 [2024-11-19 11:27:51.594552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.187 [2024-11-19 11:27:51.594590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.187 qpair failed and we were unable to recover it. 
00:25:56.187 [2024-11-19 11:27:51.604384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.187 [2024-11-19 11:27:51.604478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.187 [2024-11-19 11:27:51.604502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.187 [2024-11-19 11:27:51.604515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.187 [2024-11-19 11:27:51.604528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.187 [2024-11-19 11:27:51.604556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.187 qpair failed and we were unable to recover it. 
00:25:56.187 [2024-11-19 11:27:51.614420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.187 [2024-11-19 11:27:51.614537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.187 [2024-11-19 11:27:51.614563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.187 [2024-11-19 11:27:51.614578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.187 [2024-11-19 11:27:51.614590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.187 [2024-11-19 11:27:51.614629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.187 qpair failed and we were unable to recover it. 
00:25:56.187 [2024-11-19 11:27:51.624447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.187 [2024-11-19 11:27:51.624546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.187 [2024-11-19 11:27:51.624571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.187 [2024-11-19 11:27:51.624586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.187 [2024-11-19 11:27:51.624598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.187 [2024-11-19 11:27:51.624627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.187 qpair failed and we were unable to recover it. 
00:25:56.187 - 00:25:56.708 [2024-11-19 11:27:51.634450 - 11:27:51.965660] (the same six-entry connect-failure cycle repeats 34 more times at ~10 ms intervals: Unknown controller ID 0x1; Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x1045fa0; CQ transport error -6 (No such device or address) on qpair id 3; each cycle ending "qpair failed and we were unable to recover it.")
00:25:56.708 [2024-11-19 11:27:51.975429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.708 [2024-11-19 11:27:51.975526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.708 [2024-11-19 11:27:51.975551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.708 [2024-11-19 11:27:51.975573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.708 [2024-11-19 11:27:51.975587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.708 [2024-11-19 11:27:51.975616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.708 qpair failed and we were unable to recover it. 
00:25:56.708 [2024-11-19 11:27:51.985488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:51.985621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:51.985647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:51.985661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.709 [2024-11-19 11:27:51.985672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.709 [2024-11-19 11:27:51.985701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.709 qpair failed and we were unable to recover it. 
00:25:56.709 [2024-11-19 11:27:51.995435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:51.995548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:51.995573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:51.995587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.709 [2024-11-19 11:27:51.995599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.709 [2024-11-19 11:27:51.995638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.709 qpair failed and we were unable to recover it. 
00:25:56.709 [2024-11-19 11:27:52.005616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:52.005740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:52.005766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:52.005780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.709 [2024-11-19 11:27:52.005792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.709 [2024-11-19 11:27:52.005821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.709 qpair failed and we were unable to recover it. 
00:25:56.709 [2024-11-19 11:27:52.015510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:52.015656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:52.015681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:52.015696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.709 [2024-11-19 11:27:52.015708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.709 [2024-11-19 11:27:52.015736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.709 qpair failed and we were unable to recover it. 
00:25:56.709 [2024-11-19 11:27:52.025533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:52.025635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:52.025660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:52.025674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.709 [2024-11-19 11:27:52.025686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.709 [2024-11-19 11:27:52.025714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.709 qpair failed and we were unable to recover it. 
00:25:56.709 [2024-11-19 11:27:52.035545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:52.035675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:52.035700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:52.035714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.709 [2024-11-19 11:27:52.035726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.709 [2024-11-19 11:27:52.035755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.709 qpair failed and we were unable to recover it. 
00:25:56.709 [2024-11-19 11:27:52.045576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:52.045681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:52.045706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:52.045720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.709 [2024-11-19 11:27:52.045732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.709 [2024-11-19 11:27:52.045761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.709 qpair failed and we were unable to recover it. 
00:25:56.709 [2024-11-19 11:27:52.055635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:52.055750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:52.055774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:52.055788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.709 [2024-11-19 11:27:52.055800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.709 [2024-11-19 11:27:52.055828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.709 qpair failed and we were unable to recover it. 
00:25:56.709 [2024-11-19 11:27:52.065657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:52.065784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:52.065815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:52.065831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.709 [2024-11-19 11:27:52.065843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.709 [2024-11-19 11:27:52.065871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.709 qpair failed and we were unable to recover it. 
00:25:56.709 [2024-11-19 11:27:52.075664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:52.075779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:52.075804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:52.075818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.709 [2024-11-19 11:27:52.075830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.709 [2024-11-19 11:27:52.075860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.709 qpair failed and we were unable to recover it. 
00:25:56.709 [2024-11-19 11:27:52.085773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:52.085923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:52.085949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:52.085963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.709 [2024-11-19 11:27:52.085975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.709 [2024-11-19 11:27:52.086004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.709 qpair failed and we were unable to recover it. 
00:25:56.709 [2024-11-19 11:27:52.095830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:52.095967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:52.095993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:52.096007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.709 [2024-11-19 11:27:52.096019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.709 [2024-11-19 11:27:52.096048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.709 qpair failed and we were unable to recover it. 
00:25:56.709 [2024-11-19 11:27:52.105797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.709 [2024-11-19 11:27:52.105897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.709 [2024-11-19 11:27:52.105922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.709 [2024-11-19 11:27:52.105944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.710 [2024-11-19 11:27:52.105957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.710 [2024-11-19 11:27:52.105986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.710 qpair failed and we were unable to recover it. 
00:25:56.710 [2024-11-19 11:27:52.115833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.710 [2024-11-19 11:27:52.115981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.710 [2024-11-19 11:27:52.116007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.710 [2024-11-19 11:27:52.116022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.710 [2024-11-19 11:27:52.116034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.710 [2024-11-19 11:27:52.116063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.710 qpair failed and we were unable to recover it. 
00:25:56.710 [2024-11-19 11:27:52.125890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.710 [2024-11-19 11:27:52.126035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.710 [2024-11-19 11:27:52.126060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.710 [2024-11-19 11:27:52.126075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.710 [2024-11-19 11:27:52.126087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.710 [2024-11-19 11:27:52.126115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.710 qpair failed and we were unable to recover it. 
00:25:56.710 [2024-11-19 11:27:52.135948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.710 [2024-11-19 11:27:52.136083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.710 [2024-11-19 11:27:52.136108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.710 [2024-11-19 11:27:52.136123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.710 [2024-11-19 11:27:52.136134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.710 [2024-11-19 11:27:52.136163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.710 qpair failed and we were unable to recover it. 
00:25:56.710 [2024-11-19 11:27:52.145967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.710 [2024-11-19 11:27:52.146073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.710 [2024-11-19 11:27:52.146098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.710 [2024-11-19 11:27:52.146112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.710 [2024-11-19 11:27:52.146124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.710 [2024-11-19 11:27:52.146152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.710 qpair failed and we were unable to recover it. 
00:25:56.710 [2024-11-19 11:27:52.155937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.710 [2024-11-19 11:27:52.156018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.710 [2024-11-19 11:27:52.156041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.710 [2024-11-19 11:27:52.156055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.710 [2024-11-19 11:27:52.156067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.710 [2024-11-19 11:27:52.156096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.710 qpair failed and we were unable to recover it. 
00:25:56.710 [2024-11-19 11:27:52.166032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.710 [2024-11-19 11:27:52.166140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.710 [2024-11-19 11:27:52.166165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.710 [2024-11-19 11:27:52.166179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.710 [2024-11-19 11:27:52.166191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.710 [2024-11-19 11:27:52.166219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.710 qpair failed and we were unable to recover it. 
00:25:56.710 [2024-11-19 11:27:52.176051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.710 [2024-11-19 11:27:52.176189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.710 [2024-11-19 11:27:52.176214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.710 [2024-11-19 11:27:52.176228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.710 [2024-11-19 11:27:52.176241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.710 [2024-11-19 11:27:52.176270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.710 qpair failed and we were unable to recover it. 
00:25:56.710 [2024-11-19 11:27:52.186025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.710 [2024-11-19 11:27:52.186129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.710 [2024-11-19 11:27:52.186153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.710 [2024-11-19 11:27:52.186166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.710 [2024-11-19 11:27:52.186178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.710 [2024-11-19 11:27:52.186208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.710 qpair failed and we were unable to recover it. 
00:25:56.710 [2024-11-19 11:27:52.196056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.710 [2024-11-19 11:27:52.196154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.710 [2024-11-19 11:27:52.196184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.710 [2024-11-19 11:27:52.196199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.710 [2024-11-19 11:27:52.196211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.710 [2024-11-19 11:27:52.196239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.710 qpair failed and we were unable to recover it. 
00:25:56.969 [2024-11-19 11:27:52.206171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.969 [2024-11-19 11:27:52.206280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.969 [2024-11-19 11:27:52.206305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.969 [2024-11-19 11:27:52.206319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.969 [2024-11-19 11:27:52.206331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.969 [2024-11-19 11:27:52.206360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.969 qpair failed and we were unable to recover it. 
00:25:56.969 [2024-11-19 11:27:52.216103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.969 [2024-11-19 11:27:52.216204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.969 [2024-11-19 11:27:52.216229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.969 [2024-11-19 11:27:52.216243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.969 [2024-11-19 11:27:52.216255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.969 [2024-11-19 11:27:52.216284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.969 qpair failed and we were unable to recover it. 
00:25:56.969 [2024-11-19 11:27:52.226134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.969 [2024-11-19 11:27:52.226264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.969 [2024-11-19 11:27:52.226289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.969 [2024-11-19 11:27:52.226304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.969 [2024-11-19 11:27:52.226316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.969 [2024-11-19 11:27:52.226344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.969 qpair failed and we were unable to recover it. 
00:25:56.969 [2024-11-19 11:27:52.236161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.969 [2024-11-19 11:27:52.236259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.969 [2024-11-19 11:27:52.236283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.969 [2024-11-19 11:27:52.236302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.969 [2024-11-19 11:27:52.236315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:56.969 [2024-11-19 11:27:52.236344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.969 qpair failed and we were unable to recover it. 
00:25:56.969 [2024-11-19 11:27:52.246155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.969 [2024-11-19 11:27:52.246260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.969 [2024-11-19 11:27:52.246285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.969 [2024-11-19 11:27:52.246299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.969 [2024-11-19 11:27:52.246312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.969 [2024-11-19 11:27:52.246341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.969 qpair failed and we were unable to recover it.
00:25:56.969 [2024-11-19 11:27:52.256234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.969 [2024-11-19 11:27:52.256339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.969 [2024-11-19 11:27:52.256372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.969 [2024-11-19 11:27:52.256389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.969 [2024-11-19 11:27:52.256401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.256430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.266302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.970 [2024-11-19 11:27:52.266430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.970 [2024-11-19 11:27:52.266457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.970 [2024-11-19 11:27:52.266471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.970 [2024-11-19 11:27:52.266484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.266512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.276301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.970 [2024-11-19 11:27:52.276412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.970 [2024-11-19 11:27:52.276438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.970 [2024-11-19 11:27:52.276453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.970 [2024-11-19 11:27:52.276465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.276493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.286343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.970 [2024-11-19 11:27:52.286475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.970 [2024-11-19 11:27:52.286501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.970 [2024-11-19 11:27:52.286515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.970 [2024-11-19 11:27:52.286527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.286556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.296356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.970 [2024-11-19 11:27:52.296464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.970 [2024-11-19 11:27:52.296490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.970 [2024-11-19 11:27:52.296504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.970 [2024-11-19 11:27:52.296517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.296545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.306397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.970 [2024-11-19 11:27:52.306513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.970 [2024-11-19 11:27:52.306538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.970 [2024-11-19 11:27:52.306553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.970 [2024-11-19 11:27:52.306566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.306596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.316399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.970 [2024-11-19 11:27:52.316494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.970 [2024-11-19 11:27:52.316518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.970 [2024-11-19 11:27:52.316533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.970 [2024-11-19 11:27:52.316545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.316574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.326437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.970 [2024-11-19 11:27:52.326538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.970 [2024-11-19 11:27:52.326562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.970 [2024-11-19 11:27:52.326576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.970 [2024-11-19 11:27:52.326588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.326626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.336445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.970 [2024-11-19 11:27:52.336532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.970 [2024-11-19 11:27:52.336558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.970 [2024-11-19 11:27:52.336572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.970 [2024-11-19 11:27:52.336584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.336612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.346531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.970 [2024-11-19 11:27:52.346619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.970 [2024-11-19 11:27:52.346643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.970 [2024-11-19 11:27:52.346656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.970 [2024-11-19 11:27:52.346668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.346696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.356499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.970 [2024-11-19 11:27:52.356587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.970 [2024-11-19 11:27:52.356616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.970 [2024-11-19 11:27:52.356630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.970 [2024-11-19 11:27:52.356643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.356672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.366605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.970 [2024-11-19 11:27:52.366726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.970 [2024-11-19 11:27:52.366751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.970 [2024-11-19 11:27:52.366771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.970 [2024-11-19 11:27:52.366784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.366812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.376611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.970 [2024-11-19 11:27:52.376746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.970 [2024-11-19 11:27:52.376770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.970 [2024-11-19 11:27:52.376784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.970 [2024-11-19 11:27:52.376796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.970 [2024-11-19 11:27:52.376825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.970 qpair failed and we were unable to recover it.
00:25:56.970 [2024-11-19 11:27:52.386614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.971 [2024-11-19 11:27:52.386704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.971 [2024-11-19 11:27:52.386731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.971 [2024-11-19 11:27:52.386746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.971 [2024-11-19 11:27:52.386758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.971 [2024-11-19 11:27:52.386786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.971 qpair failed and we were unable to recover it.
00:25:56.971 [2024-11-19 11:27:52.396649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.971 [2024-11-19 11:27:52.396766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.971 [2024-11-19 11:27:52.396790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.971 [2024-11-19 11:27:52.396804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.971 [2024-11-19 11:27:52.396816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.971 [2024-11-19 11:27:52.396844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.971 qpair failed and we were unable to recover it.
00:25:56.971 [2024-11-19 11:27:52.406713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.971 [2024-11-19 11:27:52.406823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.971 [2024-11-19 11:27:52.406849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.971 [2024-11-19 11:27:52.406863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.971 [2024-11-19 11:27:52.406876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.971 [2024-11-19 11:27:52.406911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.971 qpair failed and we were unable to recover it.
00:25:56.971 [2024-11-19 11:27:52.416723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.971 [2024-11-19 11:27:52.416849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.971 [2024-11-19 11:27:52.416874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.971 [2024-11-19 11:27:52.416888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.971 [2024-11-19 11:27:52.416900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.971 [2024-11-19 11:27:52.416929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.971 qpair failed and we were unable to recover it.
00:25:56.971 [2024-11-19 11:27:52.426745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.971 [2024-11-19 11:27:52.426851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.971 [2024-11-19 11:27:52.426876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.971 [2024-11-19 11:27:52.426890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.971 [2024-11-19 11:27:52.426902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.971 [2024-11-19 11:27:52.426931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.971 qpair failed and we were unable to recover it.
00:25:56.971 [2024-11-19 11:27:52.436767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.971 [2024-11-19 11:27:52.436917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.971 [2024-11-19 11:27:52.436942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.971 [2024-11-19 11:27:52.436956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.971 [2024-11-19 11:27:52.436969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.971 [2024-11-19 11:27:52.436997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.971 qpair failed and we were unable to recover it.
00:25:56.971 [2024-11-19 11:27:52.446805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.971 [2024-11-19 11:27:52.446913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.971 [2024-11-19 11:27:52.446938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.971 [2024-11-19 11:27:52.446953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.971 [2024-11-19 11:27:52.446966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.971 [2024-11-19 11:27:52.447006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.971 qpair failed and we were unable to recover it.
00:25:56.971 [2024-11-19 11:27:52.456784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:56.971 [2024-11-19 11:27:52.456900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:56.971 [2024-11-19 11:27:52.456925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:56.971 [2024-11-19 11:27:52.456939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:56.971 [2024-11-19 11:27:52.456952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:56.971 [2024-11-19 11:27:52.456981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:56.971 qpair failed and we were unable to recover it.
00:25:57.231 [2024-11-19 11:27:52.466862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.231 [2024-11-19 11:27:52.466961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.231 [2024-11-19 11:27:52.466985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.231 [2024-11-19 11:27:52.466999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.231 [2024-11-19 11:27:52.467012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.231 [2024-11-19 11:27:52.467040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.231 qpair failed and we were unable to recover it.
00:25:57.231 [2024-11-19 11:27:52.476890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.231 [2024-11-19 11:27:52.477027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.231 [2024-11-19 11:27:52.477052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.231 [2024-11-19 11:27:52.477067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.231 [2024-11-19 11:27:52.477079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.231 [2024-11-19 11:27:52.477109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.231 qpair failed and we were unable to recover it.
00:25:57.231 [2024-11-19 11:27:52.486907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.231 [2024-11-19 11:27:52.487018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.231 [2024-11-19 11:27:52.487044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.231 [2024-11-19 11:27:52.487058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.231 [2024-11-19 11:27:52.487070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.231 [2024-11-19 11:27:52.487109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.231 qpair failed and we were unable to recover it.
00:25:57.231 [2024-11-19 11:27:52.496968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.231 [2024-11-19 11:27:52.497102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.231 [2024-11-19 11:27:52.497129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.231 [2024-11-19 11:27:52.497149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.231 [2024-11-19 11:27:52.497162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.231 [2024-11-19 11:27:52.497191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.231 qpair failed and we were unable to recover it.
00:25:57.231 [2024-11-19 11:27:52.506949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.231 [2024-11-19 11:27:52.507049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.231 [2024-11-19 11:27:52.507073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.231 [2024-11-19 11:27:52.507087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.231 [2024-11-19 11:27:52.507099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.231 [2024-11-19 11:27:52.507128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.231 qpair failed and we were unable to recover it.
00:25:57.231 [2024-11-19 11:27:52.516947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.231 [2024-11-19 11:27:52.517061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.231 [2024-11-19 11:27:52.517086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.231 [2024-11-19 11:27:52.517101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.231 [2024-11-19 11:27:52.517113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.231 [2024-11-19 11:27:52.517142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.231 qpair failed and we were unable to recover it.
00:25:57.231 [2024-11-19 11:27:52.527032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.231 [2024-11-19 11:27:52.527158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.231 [2024-11-19 11:27:52.527184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.231 [2024-11-19 11:27:52.527198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.232 [2024-11-19 11:27:52.527211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.232 [2024-11-19 11:27:52.527239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.232 qpair failed and we were unable to recover it.
00:25:57.232 [2024-11-19 11:27:52.537056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.232 [2024-11-19 11:27:52.537177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.232 [2024-11-19 11:27:52.537202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.232 [2024-11-19 11:27:52.537216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.232 [2024-11-19 11:27:52.537228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.232 [2024-11-19 11:27:52.537257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.232 qpair failed and we were unable to recover it.
00:25:57.232 [2024-11-19 11:27:52.547032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.232 [2024-11-19 11:27:52.547152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.232 [2024-11-19 11:27:52.547178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.232 [2024-11-19 11:27:52.547192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.232 [2024-11-19 11:27:52.547204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.232 [2024-11-19 11:27:52.547233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.232 qpair failed and we were unable to recover it.
00:25:57.232 [2024-11-19 11:27:52.557100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.232 [2024-11-19 11:27:52.557201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.232 [2024-11-19 11:27:52.557227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.232 [2024-11-19 11:27:52.557241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.232 [2024-11-19 11:27:52.557253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.232 [2024-11-19 11:27:52.557281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.232 qpair failed and we were unable to recover it.
00:25:57.232 [2024-11-19 11:27:52.567124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.232 [2024-11-19 11:27:52.567232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.232 [2024-11-19 11:27:52.567258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.232 [2024-11-19 11:27:52.567272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.232 [2024-11-19 11:27:52.567285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.232 [2024-11-19 11:27:52.567313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.232 qpair failed and we were unable to recover it.
00:25:57.232 [2024-11-19 11:27:52.577123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.232 [2024-11-19 11:27:52.577223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.232 [2024-11-19 11:27:52.577246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.232 [2024-11-19 11:27:52.577261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.232 [2024-11-19 11:27:52.577273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.232 [2024-11-19 11:27:52.577302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.232 qpair failed and we were unable to recover it.
00:25:57.232 [2024-11-19 11:27:52.587158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.232 [2024-11-19 11:27:52.587264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.232 [2024-11-19 11:27:52.587288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.232 [2024-11-19 11:27:52.587302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.232 [2024-11-19 11:27:52.587315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.232 [2024-11-19 11:27:52.587343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.232 qpair failed and we were unable to recover it.
00:25:57.232 [2024-11-19 11:27:52.597203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.232 [2024-11-19 11:27:52.597303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.232 [2024-11-19 11:27:52.597327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.232 [2024-11-19 11:27:52.597341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.232 [2024-11-19 11:27:52.597352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.232 [2024-11-19 11:27:52.597389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.232 qpair failed and we were unable to recover it. 
00:25:57.232 [2024-11-19 11:27:52.607264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.232 [2024-11-19 11:27:52.607382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.232 [2024-11-19 11:27:52.607408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.232 [2024-11-19 11:27:52.607422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.232 [2024-11-19 11:27:52.607434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.232 [2024-11-19 11:27:52.607463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.232 qpair failed and we were unable to recover it. 
00:25:57.232 [2024-11-19 11:27:52.617256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.232 [2024-11-19 11:27:52.617357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.232 [2024-11-19 11:27:52.617387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.232 [2024-11-19 11:27:52.617402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.232 [2024-11-19 11:27:52.617413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.232 [2024-11-19 11:27:52.617442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.232 qpair failed and we were unable to recover it. 
00:25:57.232 [2024-11-19 11:27:52.627316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.232 [2024-11-19 11:27:52.627460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.232 [2024-11-19 11:27:52.627485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.232 [2024-11-19 11:27:52.627506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.232 [2024-11-19 11:27:52.627519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.232 [2024-11-19 11:27:52.627548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.232 qpair failed and we were unable to recover it. 
00:25:57.232 [2024-11-19 11:27:52.637306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.232 [2024-11-19 11:27:52.637415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.232 [2024-11-19 11:27:52.637441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.232 [2024-11-19 11:27:52.637456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.232 [2024-11-19 11:27:52.637468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.232 [2024-11-19 11:27:52.637496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.232 qpair failed and we were unable to recover it. 
00:25:57.232 [2024-11-19 11:27:52.647426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.232 [2024-11-19 11:27:52.647526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.232 [2024-11-19 11:27:52.647564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.232 [2024-11-19 11:27:52.647578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.232 [2024-11-19 11:27:52.647599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.232 [2024-11-19 11:27:52.647628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.233 qpair failed and we were unable to recover it. 
00:25:57.233 [2024-11-19 11:27:52.657392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.233 [2024-11-19 11:27:52.657487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.233 [2024-11-19 11:27:52.657512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.233 [2024-11-19 11:27:52.657526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.233 [2024-11-19 11:27:52.657537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.233 [2024-11-19 11:27:52.657566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.233 qpair failed and we were unable to recover it. 
00:25:57.233 [2024-11-19 11:27:52.667418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.233 [2024-11-19 11:27:52.667521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.233 [2024-11-19 11:27:52.667547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.233 [2024-11-19 11:27:52.667561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.233 [2024-11-19 11:27:52.667573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.233 [2024-11-19 11:27:52.667608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.233 qpair failed and we were unable to recover it. 
00:25:57.233 [2024-11-19 11:27:52.677437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.233 [2024-11-19 11:27:52.677523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.233 [2024-11-19 11:27:52.677547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.233 [2024-11-19 11:27:52.677561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.233 [2024-11-19 11:27:52.677573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.233 [2024-11-19 11:27:52.677602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.233 qpair failed and we were unable to recover it. 
00:25:57.233 [2024-11-19 11:27:52.687462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.233 [2024-11-19 11:27:52.687573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.233 [2024-11-19 11:27:52.687598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.233 [2024-11-19 11:27:52.687613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.233 [2024-11-19 11:27:52.687625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.233 [2024-11-19 11:27:52.687654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.233 qpair failed and we were unable to recover it. 
00:25:57.233 [2024-11-19 11:27:52.697463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.233 [2024-11-19 11:27:52.697553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.233 [2024-11-19 11:27:52.697577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.233 [2024-11-19 11:27:52.697591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.233 [2024-11-19 11:27:52.697603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.233 [2024-11-19 11:27:52.697630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.233 qpair failed and we were unable to recover it. 
00:25:57.233 [2024-11-19 11:27:52.707496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.233 [2024-11-19 11:27:52.707638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.233 [2024-11-19 11:27:52.707665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.233 [2024-11-19 11:27:52.707679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.233 [2024-11-19 11:27:52.707691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.233 [2024-11-19 11:27:52.707719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.233 qpair failed and we were unable to recover it. 
00:25:57.233 [2024-11-19 11:27:52.717553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.233 [2024-11-19 11:27:52.717659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.233 [2024-11-19 11:27:52.717685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.233 [2024-11-19 11:27:52.717700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.233 [2024-11-19 11:27:52.717712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.233 [2024-11-19 11:27:52.717741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.233 qpair failed and we were unable to recover it. 
00:25:57.493 [2024-11-19 11:27:52.727612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.493 [2024-11-19 11:27:52.727738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.493 [2024-11-19 11:27:52.727761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.493 [2024-11-19 11:27:52.727775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.493 [2024-11-19 11:27:52.727787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.493 [2024-11-19 11:27:52.727817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.493 qpair failed and we were unable to recover it. 
00:25:57.493 [2024-11-19 11:27:52.737593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.493 [2024-11-19 11:27:52.737681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.493 [2024-11-19 11:27:52.737705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.493 [2024-11-19 11:27:52.737719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.493 [2024-11-19 11:27:52.737731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.493 [2024-11-19 11:27:52.737760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.493 qpair failed and we were unable to recover it. 
00:25:57.493 [2024-11-19 11:27:52.747620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.493 [2024-11-19 11:27:52.747742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.493 [2024-11-19 11:27:52.747767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.493 [2024-11-19 11:27:52.747781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.493 [2024-11-19 11:27:52.747793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.493 [2024-11-19 11:27:52.747822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.493 qpair failed and we were unable to recover it. 
00:25:57.493 [2024-11-19 11:27:52.757694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.493 [2024-11-19 11:27:52.757795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.493 [2024-11-19 11:27:52.757818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.493 [2024-11-19 11:27:52.757838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.493 [2024-11-19 11:27:52.757850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.493 [2024-11-19 11:27:52.757879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.493 qpair failed and we were unable to recover it. 
00:25:57.493 [2024-11-19 11:27:52.767698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.493 [2024-11-19 11:27:52.767821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.493 [2024-11-19 11:27:52.767846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.493 [2024-11-19 11:27:52.767861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.493 [2024-11-19 11:27:52.767873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.493 [2024-11-19 11:27:52.767902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.493 qpair failed and we were unable to recover it. 
00:25:57.493 [2024-11-19 11:27:52.777713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.493 [2024-11-19 11:27:52.777816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.493 [2024-11-19 11:27:52.777842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.493 [2024-11-19 11:27:52.777856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.493 [2024-11-19 11:27:52.777868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.493 [2024-11-19 11:27:52.777897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.493 qpair failed and we were unable to recover it. 
00:25:57.493 [2024-11-19 11:27:52.787725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.493 [2024-11-19 11:27:52.787830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.493 [2024-11-19 11:27:52.787855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.493 [2024-11-19 11:27:52.787870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.493 [2024-11-19 11:27:52.787882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.493 [2024-11-19 11:27:52.787910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.493 qpair failed and we were unable to recover it. 
00:25:57.493 [2024-11-19 11:27:52.797744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.493 [2024-11-19 11:27:52.797844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.493 [2024-11-19 11:27:52.797871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.493 [2024-11-19 11:27:52.797886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.493 [2024-11-19 11:27:52.797897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.493 [2024-11-19 11:27:52.797934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.493 qpair failed and we were unable to recover it. 
00:25:57.493 [2024-11-19 11:27:52.807808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.493 [2024-11-19 11:27:52.807914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.493 [2024-11-19 11:27:52.807939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.493 [2024-11-19 11:27:52.807954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.493 [2024-11-19 11:27:52.807966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.493 [2024-11-19 11:27:52.807994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.493 qpair failed and we were unable to recover it. 
00:25:57.493 [2024-11-19 11:27:52.817837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.493 [2024-11-19 11:27:52.817937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.493 [2024-11-19 11:27:52.817963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.493 [2024-11-19 11:27:52.817977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.493 [2024-11-19 11:27:52.817990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.493 [2024-11-19 11:27:52.818018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.493 qpair failed and we were unable to recover it. 
00:25:57.493 [2024-11-19 11:27:52.827876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.493 [2024-11-19 11:27:52.827980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.493 [2024-11-19 11:27:52.828003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.828017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.828030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.828058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.494 qpair failed and we were unable to recover it. 
00:25:57.494 [2024-11-19 11:27:52.837868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.494 [2024-11-19 11:27:52.837971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.494 [2024-11-19 11:27:52.837995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.838009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.838021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.838050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.494 qpair failed and we were unable to recover it. 
00:25:57.494 [2024-11-19 11:27:52.847976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.494 [2024-11-19 11:27:52.848118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.494 [2024-11-19 11:27:52.848142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.848157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.848168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.848197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.494 qpair failed and we were unable to recover it. 
00:25:57.494 [2024-11-19 11:27:52.857938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.494 [2024-11-19 11:27:52.858037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.494 [2024-11-19 11:27:52.858061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.858075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.858087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.858115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.494 qpair failed and we were unable to recover it. 
00:25:57.494 [2024-11-19 11:27:52.867968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.494 [2024-11-19 11:27:52.868070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.494 [2024-11-19 11:27:52.868094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.868108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.868120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.868149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.494 qpair failed and we were unable to recover it. 
00:25:57.494 [2024-11-19 11:27:52.877983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.494 [2024-11-19 11:27:52.878086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.494 [2024-11-19 11:27:52.878112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.878126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.878139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.878169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.494 qpair failed and we were unable to recover it. 
00:25:57.494 [2024-11-19 11:27:52.888082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.494 [2024-11-19 11:27:52.888226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.494 [2024-11-19 11:27:52.888252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.888272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.888285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.888314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.494 qpair failed and we were unable to recover it. 
00:25:57.494 [2024-11-19 11:27:52.898121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.494 [2024-11-19 11:27:52.898246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.494 [2024-11-19 11:27:52.898271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.898286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.898299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.898337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.494 qpair failed and we were unable to recover it. 
00:25:57.494 [2024-11-19 11:27:52.908102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.494 [2024-11-19 11:27:52.908202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.494 [2024-11-19 11:27:52.908225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.908239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.908251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.908280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.494 qpair failed and we were unable to recover it. 
00:25:57.494 [2024-11-19 11:27:52.918081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.494 [2024-11-19 11:27:52.918178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.494 [2024-11-19 11:27:52.918202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.918216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.918228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.918257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.494 qpair failed and we were unable to recover it. 
00:25:57.494 [2024-11-19 11:27:52.928176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.494 [2024-11-19 11:27:52.928285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.494 [2024-11-19 11:27:52.928311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.928325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.928338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.928380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.494 qpair failed and we were unable to recover it. 
00:25:57.494 [2024-11-19 11:27:52.938136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.494 [2024-11-19 11:27:52.938239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.494 [2024-11-19 11:27:52.938265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.938279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.938292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.938321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.494 qpair failed and we were unable to recover it. 
00:25:57.494 [2024-11-19 11:27:52.948178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.494 [2024-11-19 11:27:52.948279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.494 [2024-11-19 11:27:52.948304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.494 [2024-11-19 11:27:52.948318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.494 [2024-11-19 11:27:52.948330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.494 [2024-11-19 11:27:52.948359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.495 qpair failed and we were unable to recover it. 
00:25:57.495 [2024-11-19 11:27:52.958234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.495 [2024-11-19 11:27:52.958331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.495 [2024-11-19 11:27:52.958356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.495 [2024-11-19 11:27:52.958379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.495 [2024-11-19 11:27:52.958392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.495 [2024-11-19 11:27:52.958421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.495 qpair failed and we were unable to recover it. 
00:25:57.495 [2024-11-19 11:27:52.968232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.495 [2024-11-19 11:27:52.968338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.495 [2024-11-19 11:27:52.968383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.495 [2024-11-19 11:27:52.968400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.495 [2024-11-19 11:27:52.968412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.495 [2024-11-19 11:27:52.968441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.495 qpair failed and we were unable to recover it. 
00:25:57.495 [2024-11-19 11:27:52.978322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.495 [2024-11-19 11:27:52.978451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.495 [2024-11-19 11:27:52.978479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.495 [2024-11-19 11:27:52.978494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.495 [2024-11-19 11:27:52.978506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.495 [2024-11-19 11:27:52.978544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.495 qpair failed and we were unable to recover it. 
00:25:57.754 [2024-11-19 11:27:52.988355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.754 [2024-11-19 11:27:52.988448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.754 [2024-11-19 11:27:52.988472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.754 [2024-11-19 11:27:52.988485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.754 [2024-11-19 11:27:52.988498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.754 [2024-11-19 11:27:52.988526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.754 qpair failed and we were unable to recover it. 
00:25:57.754 [2024-11-19 11:27:52.998320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.754 [2024-11-19 11:27:52.998450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.754 [2024-11-19 11:27:52.998476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.754 [2024-11-19 11:27:52.998491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.754 [2024-11-19 11:27:52.998503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.754 [2024-11-19 11:27:52.998532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.754 qpair failed and we were unable to recover it. 
00:25:57.754 [2024-11-19 11:27:53.008422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.754 [2024-11-19 11:27:53.008525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.754 [2024-11-19 11:27:53.008550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.754 [2024-11-19 11:27:53.008565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.754 [2024-11-19 11:27:53.008577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.754 [2024-11-19 11:27:53.008605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.754 qpair failed and we were unable to recover it. 
00:25:57.754 [2024-11-19 11:27:53.018412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.754 [2024-11-19 11:27:53.018504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.754 [2024-11-19 11:27:53.018527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.754 [2024-11-19 11:27:53.018547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.754 [2024-11-19 11:27:53.018560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.754 [2024-11-19 11:27:53.018589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.754 qpair failed and we were unable to recover it. 
00:25:57.754 [2024-11-19 11:27:53.028459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.754 [2024-11-19 11:27:53.028560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.754 [2024-11-19 11:27:53.028585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.754 [2024-11-19 11:27:53.028600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.754 [2024-11-19 11:27:53.028612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.754 [2024-11-19 11:27:53.028640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.754 qpair failed and we were unable to recover it. 
00:25:57.754 [2024-11-19 11:27:53.038482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.754 [2024-11-19 11:27:53.038566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.754 [2024-11-19 11:27:53.038590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.754 [2024-11-19 11:27:53.038604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.754 [2024-11-19 11:27:53.038616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.754 [2024-11-19 11:27:53.038644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.754 qpair failed and we were unable to recover it. 
00:25:57.754 [2024-11-19 11:27:53.048579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.754 [2024-11-19 11:27:53.048679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.754 [2024-11-19 11:27:53.048704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.754 [2024-11-19 11:27:53.048718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.754 [2024-11-19 11:27:53.048731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.754 [2024-11-19 11:27:53.048758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.754 qpair failed and we were unable to recover it. 
00:25:57.754 [2024-11-19 11:27:53.058513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.754 [2024-11-19 11:27:53.058642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.754 [2024-11-19 11:27:53.058667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.754 [2024-11-19 11:27:53.058681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.754 [2024-11-19 11:27:53.058693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.754 [2024-11-19 11:27:53.058727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.754 qpair failed and we were unable to recover it. 
00:25:57.754 [2024-11-19 11:27:53.068580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.755 [2024-11-19 11:27:53.068667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.755 [2024-11-19 11:27:53.068691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.755 [2024-11-19 11:27:53.068705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.755 [2024-11-19 11:27:53.068717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.755 [2024-11-19 11:27:53.068745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.755 qpair failed and we were unable to recover it. 
00:25:57.755 [2024-11-19 11:27:53.078607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.755 [2024-11-19 11:27:53.078694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.755 [2024-11-19 11:27:53.078718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.755 [2024-11-19 11:27:53.078733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.755 [2024-11-19 11:27:53.078746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.755 [2024-11-19 11:27:53.078775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.755 qpair failed and we were unable to recover it. 
00:25:57.755 [2024-11-19 11:27:53.088621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.755 [2024-11-19 11:27:53.088755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.755 [2024-11-19 11:27:53.088779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.755 [2024-11-19 11:27:53.088793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.755 [2024-11-19 11:27:53.088805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.755 [2024-11-19 11:27:53.088847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.755 qpair failed and we were unable to recover it. 
00:25:57.755 [2024-11-19 11:27:53.098650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.755 [2024-11-19 11:27:53.098780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.755 [2024-11-19 11:27:53.098805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.755 [2024-11-19 11:27:53.098819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.755 [2024-11-19 11:27:53.098831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.755 [2024-11-19 11:27:53.098865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.755 qpair failed and we were unable to recover it. 
00:25:57.755 [2024-11-19 11:27:53.108666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.755 [2024-11-19 11:27:53.108753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.755 [2024-11-19 11:27:53.108777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.755 [2024-11-19 11:27:53.108790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.755 [2024-11-19 11:27:53.108802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.755 [2024-11-19 11:27:53.108831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.755 qpair failed and we were unable to recover it. 
00:25:57.755 [2024-11-19 11:27:53.118741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.755 [2024-11-19 11:27:53.118835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.755 [2024-11-19 11:27:53.118859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.755 [2024-11-19 11:27:53.118872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.755 [2024-11-19 11:27:53.118885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.755 [2024-11-19 11:27:53.118924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.755 qpair failed and we were unable to recover it. 
00:25:57.755 [2024-11-19 11:27:53.128724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.755 [2024-11-19 11:27:53.128828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.755 [2024-11-19 11:27:53.128851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.755 [2024-11-19 11:27:53.128866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.755 [2024-11-19 11:27:53.128878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.755 [2024-11-19 11:27:53.128906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.755 qpair failed and we were unable to recover it. 
00:25:57.755 [2024-11-19 11:27:53.138749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.755 [2024-11-19 11:27:53.138859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.755 [2024-11-19 11:27:53.138885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.755 [2024-11-19 11:27:53.138899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.755 [2024-11-19 11:27:53.138911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.755 [2024-11-19 11:27:53.138940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.755 qpair failed and we were unable to recover it. 
00:25:57.755 [2024-11-19 11:27:53.148760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.755 [2024-11-19 11:27:53.148860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.755 [2024-11-19 11:27:53.148888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.755 [2024-11-19 11:27:53.148908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.755 [2024-11-19 11:27:53.148921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.755 [2024-11-19 11:27:53.148949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.755 qpair failed and we were unable to recover it. 
00:25:57.755 [2024-11-19 11:27:53.158794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.755 [2024-11-19 11:27:53.158889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.755 [2024-11-19 11:27:53.158913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.755 [2024-11-19 11:27:53.158927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.755 [2024-11-19 11:27:53.158939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.755 [2024-11-19 11:27:53.158967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.755 qpair failed and we were unable to recover it. 
00:25:57.755 [2024-11-19 11:27:53.168902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:57.755 [2024-11-19 11:27:53.169012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:57.755 [2024-11-19 11:27:53.169037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:57.755 [2024-11-19 11:27:53.169051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:57.755 [2024-11-19 11:27:53.169063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:57.755 [2024-11-19 11:27:53.169091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.755 qpair failed and we were unable to recover it. 
00:25:57.755 [2024-11-19 11:27:53.178852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.755 [2024-11-19 11:27:53.178956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.755 [2024-11-19 11:27:53.178981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.755 [2024-11-19 11:27:53.178996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.755 [2024-11-19 11:27:53.179008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.755 [2024-11-19 11:27:53.179037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.755 qpair failed and we were unable to recover it.
00:25:57.755 [2024-11-19 11:27:53.188884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.755 [2024-11-19 11:27:53.189001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.755 [2024-11-19 11:27:53.189026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.756 [2024-11-19 11:27:53.189048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.756 [2024-11-19 11:27:53.189060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.756 [2024-11-19 11:27:53.189093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.756 qpair failed and we were unable to recover it.
00:25:57.756 [2024-11-19 11:27:53.198949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.756 [2024-11-19 11:27:53.199045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.756 [2024-11-19 11:27:53.199068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.756 [2024-11-19 11:27:53.199083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.756 [2024-11-19 11:27:53.199095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.756 [2024-11-19 11:27:53.199123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.756 qpair failed and we were unable to recover it.
00:25:57.756 [2024-11-19 11:27:53.209009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.756 [2024-11-19 11:27:53.209112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.756 [2024-11-19 11:27:53.209135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.756 [2024-11-19 11:27:53.209149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.756 [2024-11-19 11:27:53.209161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.756 [2024-11-19 11:27:53.209189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.756 qpair failed and we were unable to recover it.
00:25:57.756 [2024-11-19 11:27:53.219050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.756 [2024-11-19 11:27:53.219153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.756 [2024-11-19 11:27:53.219179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.756 [2024-11-19 11:27:53.219194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.756 [2024-11-19 11:27:53.219206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.756 [2024-11-19 11:27:53.219234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.756 qpair failed and we were unable to recover it.
00:25:57.756 [2024-11-19 11:27:53.229016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.756 [2024-11-19 11:27:53.229118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.756 [2024-11-19 11:27:53.229144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.756 [2024-11-19 11:27:53.229158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.756 [2024-11-19 11:27:53.229170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.756 [2024-11-19 11:27:53.229199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.756 qpair failed and we were unable to recover it.
00:25:57.756 [2024-11-19 11:27:53.239056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.756 [2024-11-19 11:27:53.239163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.756 [2024-11-19 11:27:53.239188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.756 [2024-11-19 11:27:53.239203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.756 [2024-11-19 11:27:53.239214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.756 [2024-11-19 11:27:53.239243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.756 qpair failed and we were unable to recover it.
00:25:57.756 [2024-11-19 11:27:53.249100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:57.756 [2024-11-19 11:27:53.249189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:57.756 [2024-11-19 11:27:53.249213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:57.756 [2024-11-19 11:27:53.249227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:57.756 [2024-11-19 11:27:53.249239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:57.756 [2024-11-19 11:27:53.249267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:57.756 qpair failed and we were unable to recover it.
00:25:58.015 [2024-11-19 11:27:53.259138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.015 [2024-11-19 11:27:53.259243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.015 [2024-11-19 11:27:53.259269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.015 [2024-11-19 11:27:53.259283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.015 [2024-11-19 11:27:53.259295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.015 [2024-11-19 11:27:53.259324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.015 qpair failed and we were unable to recover it.
00:25:58.015 [2024-11-19 11:27:53.269133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.015 [2024-11-19 11:27:53.269236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.015 [2024-11-19 11:27:53.269262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.015 [2024-11-19 11:27:53.269276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.015 [2024-11-19 11:27:53.269289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.015 [2024-11-19 11:27:53.269318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.015 qpair failed and we were unable to recover it.
00:25:58.015 [2024-11-19 11:27:53.279162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.015 [2024-11-19 11:27:53.279261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.015 [2024-11-19 11:27:53.279286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.015 [2024-11-19 11:27:53.279306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.015 [2024-11-19 11:27:53.279319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.015 [2024-11-19 11:27:53.279348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.015 qpair failed and we were unable to recover it.
00:25:58.015 [2024-11-19 11:27:53.289201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.015 [2024-11-19 11:27:53.289309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.015 [2024-11-19 11:27:53.289335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.015 [2024-11-19 11:27:53.289349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.015 [2024-11-19 11:27:53.289371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.015 [2024-11-19 11:27:53.289402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.015 qpair failed and we were unable to recover it.
00:25:58.015 [2024-11-19 11:27:53.299249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.015 [2024-11-19 11:27:53.299352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.015 [2024-11-19 11:27:53.299384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.015 [2024-11-19 11:27:53.299399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.015 [2024-11-19 11:27:53.299411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.015 [2024-11-19 11:27:53.299440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.015 qpair failed and we were unable to recover it.
00:25:58.015 [2024-11-19 11:27:53.309250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.015 [2024-11-19 11:27:53.309346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.015 [2024-11-19 11:27:53.309378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.015 [2024-11-19 11:27:53.309393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.015 [2024-11-19 11:27:53.309405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.015 [2024-11-19 11:27:53.309434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.015 qpair failed and we were unable to recover it.
00:25:58.015 [2024-11-19 11:27:53.319318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.015 [2024-11-19 11:27:53.319421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.015 [2024-11-19 11:27:53.319445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.015 [2024-11-19 11:27:53.319459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.015 [2024-11-19 11:27:53.319471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.015 [2024-11-19 11:27:53.319514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.015 qpair failed and we were unable to recover it.
00:25:58.015 [2024-11-19 11:27:53.329321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.015 [2024-11-19 11:27:53.329439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.015 [2024-11-19 11:27:53.329463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.015 [2024-11-19 11:27:53.329477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.015 [2024-11-19 11:27:53.329490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.015 [2024-11-19 11:27:53.329519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.015 qpair failed and we were unable to recover it.
00:25:58.015 [2024-11-19 11:27:53.339378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.339467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.339492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.016 [2024-11-19 11:27:53.339506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.016 [2024-11-19 11:27:53.339519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.016 [2024-11-19 11:27:53.339547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.016 qpair failed and we were unable to recover it.
00:25:58.016 [2024-11-19 11:27:53.349396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.349486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.349510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.016 [2024-11-19 11:27:53.349524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.016 [2024-11-19 11:27:53.349536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.016 [2024-11-19 11:27:53.349565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.016 qpair failed and we were unable to recover it.
00:25:58.016 [2024-11-19 11:27:53.359417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.359508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.359533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.016 [2024-11-19 11:27:53.359548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.016 [2024-11-19 11:27:53.359560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.016 [2024-11-19 11:27:53.359589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.016 qpair failed and we were unable to recover it.
00:25:58.016 [2024-11-19 11:27:53.369442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.369543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.369566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.016 [2024-11-19 11:27:53.369580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.016 [2024-11-19 11:27:53.369592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.016 [2024-11-19 11:27:53.369620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.016 qpair failed and we were unable to recover it.
00:25:58.016 [2024-11-19 11:27:53.379478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.379578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.379602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.016 [2024-11-19 11:27:53.379616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.016 [2024-11-19 11:27:53.379629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.016 [2024-11-19 11:27:53.379658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.016 qpair failed and we were unable to recover it.
00:25:58.016 [2024-11-19 11:27:53.389468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.389555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.389579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.016 [2024-11-19 11:27:53.389594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.016 [2024-11-19 11:27:53.389606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.016 [2024-11-19 11:27:53.389635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.016 qpair failed and we were unable to recover it.
00:25:58.016 [2024-11-19 11:27:53.399510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.399597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.399621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.016 [2024-11-19 11:27:53.399635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.016 [2024-11-19 11:27:53.399647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.016 [2024-11-19 11:27:53.399675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.016 qpair failed and we were unable to recover it.
00:25:58.016 [2024-11-19 11:27:53.409542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.409656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.409681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.016 [2024-11-19 11:27:53.409702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.016 [2024-11-19 11:27:53.409715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.016 [2024-11-19 11:27:53.409743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.016 qpair failed and we were unable to recover it.
00:25:58.016 [2024-11-19 11:27:53.419595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.419682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.419707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.016 [2024-11-19 11:27:53.419720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.016 [2024-11-19 11:27:53.419732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.016 [2024-11-19 11:27:53.419760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.016 qpair failed and we were unable to recover it.
00:25:58.016 [2024-11-19 11:27:53.429606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.429689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.429712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.016 [2024-11-19 11:27:53.429726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.016 [2024-11-19 11:27:53.429738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.016 [2024-11-19 11:27:53.429767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.016 qpair failed and we were unable to recover it.
00:25:58.016 [2024-11-19 11:27:53.439708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.439819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.439844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.016 [2024-11-19 11:27:53.439858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.016 [2024-11-19 11:27:53.439870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.016 [2024-11-19 11:27:53.439898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.016 qpair failed and we were unable to recover it.
00:25:58.016 [2024-11-19 11:27:53.449698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.449804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.449829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.016 [2024-11-19 11:27:53.449843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.016 [2024-11-19 11:27:53.449856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.016 [2024-11-19 11:27:53.449889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.016 qpair failed and we were unable to recover it.
00:25:58.016 [2024-11-19 11:27:53.459697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.016 [2024-11-19 11:27:53.459794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.016 [2024-11-19 11:27:53.459817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.017 [2024-11-19 11:27:53.459831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.017 [2024-11-19 11:27:53.459843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.017 [2024-11-19 11:27:53.459872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.017 qpair failed and we were unable to recover it.
00:25:58.017 [2024-11-19 11:27:53.469733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.017 [2024-11-19 11:27:53.469858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.017 [2024-11-19 11:27:53.469883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.017 [2024-11-19 11:27:53.469898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.017 [2024-11-19 11:27:53.469910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.017 [2024-11-19 11:27:53.469939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.017 qpair failed and we were unable to recover it.
00:25:58.017 [2024-11-19 11:27:53.479738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.017 [2024-11-19 11:27:53.479836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.017 [2024-11-19 11:27:53.479865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.017 [2024-11-19 11:27:53.479880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.017 [2024-11-19 11:27:53.479892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.017 [2024-11-19 11:27:53.479921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.017 qpair failed and we were unable to recover it.
00:25:58.017 [2024-11-19 11:27:53.489778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.017 [2024-11-19 11:27:53.489930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.017 [2024-11-19 11:27:53.489955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.017 [2024-11-19 11:27:53.489970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.017 [2024-11-19 11:27:53.489982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.017 [2024-11-19 11:27:53.490021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.017 qpair failed and we were unable to recover it.
00:25:58.017 [2024-11-19 11:27:53.499818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.017 [2024-11-19 11:27:53.499936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.017 [2024-11-19 11:27:53.499961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.017 [2024-11-19 11:27:53.499976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.017 [2024-11-19 11:27:53.499988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.017 [2024-11-19 11:27:53.500017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.017 qpair failed and we were unable to recover it.
00:25:58.017 [2024-11-19 11:27:53.509839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.017 [2024-11-19 11:27:53.509937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.017 [2024-11-19 11:27:53.509966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.017 [2024-11-19 11:27:53.509980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.017 [2024-11-19 11:27:53.509992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.017 [2024-11-19 11:27:53.510022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.017 qpair failed and we were unable to recover it.
00:25:58.276 [2024-11-19 11:27:53.519866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.276 [2024-11-19 11:27:53.519964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.276 [2024-11-19 11:27:53.519988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.276 [2024-11-19 11:27:53.520002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.276 [2024-11-19 11:27:53.520014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:58.276 [2024-11-19 11:27:53.520043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:58.276 qpair failed and we were unable to recover it.
00:25:58.276 [2024-11-19 11:27:53.529890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.276 [2024-11-19 11:27:53.530033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.276 [2024-11-19 11:27:53.530058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.276 [2024-11-19 11:27:53.530073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.276 [2024-11-19 11:27:53.530085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.276 [2024-11-19 11:27:53.530116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.276 qpair failed and we were unable to recover it. 
00:25:58.276 [2024-11-19 11:27:53.539911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.276 [2024-11-19 11:27:53.540016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.276 [2024-11-19 11:27:53.540041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.276 [2024-11-19 11:27:53.540061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.276 [2024-11-19 11:27:53.540074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.276 [2024-11-19 11:27:53.540103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.276 qpair failed and we were unable to recover it. 
00:25:58.276 [2024-11-19 11:27:53.549924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.276 [2024-11-19 11:27:53.550026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.276 [2024-11-19 11:27:53.550050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.276 [2024-11-19 11:27:53.550063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.276 [2024-11-19 11:27:53.550075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.276 [2024-11-19 11:27:53.550104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.276 qpair failed and we were unable to recover it. 
00:25:58.276 [2024-11-19 11:27:53.559979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.276 [2024-11-19 11:27:53.560097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.276 [2024-11-19 11:27:53.560122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.276 [2024-11-19 11:27:53.560137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.276 [2024-11-19 11:27:53.560149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.276 [2024-11-19 11:27:53.560177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.276 qpair failed and we were unable to recover it. 
00:25:58.276 [2024-11-19 11:27:53.570138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.276 [2024-11-19 11:27:53.570295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.276 [2024-11-19 11:27:53.570320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.276 [2024-11-19 11:27:53.570335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.276 [2024-11-19 11:27:53.570347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.276 [2024-11-19 11:27:53.570383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.276 qpair failed and we were unable to recover it. 
00:25:58.276 [2024-11-19 11:27:53.580051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.276 [2024-11-19 11:27:53.580173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.276 [2024-11-19 11:27:53.580196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.276 [2024-11-19 11:27:53.580211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.276 [2024-11-19 11:27:53.580223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.276 [2024-11-19 11:27:53.580265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.276 qpair failed and we were unable to recover it. 
00:25:58.276 [2024-11-19 11:27:53.590115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.276 [2024-11-19 11:27:53.590220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.276 [2024-11-19 11:27:53.590245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.276 [2024-11-19 11:27:53.590260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.276 [2024-11-19 11:27:53.590271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.276 [2024-11-19 11:27:53.590300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.276 qpair failed and we were unable to recover it. 
00:25:58.276 [2024-11-19 11:27:53.600119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.600218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.600243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.600257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.277 [2024-11-19 11:27:53.600269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.277 [2024-11-19 11:27:53.600298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.277 qpair failed and we were unable to recover it. 
00:25:58.277 [2024-11-19 11:27:53.610079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.610183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.610207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.610221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.277 [2024-11-19 11:27:53.610233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.277 [2024-11-19 11:27:53.610261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.277 qpair failed and we were unable to recover it. 
00:25:58.277 [2024-11-19 11:27:53.620108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.620219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.620244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.620258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.277 [2024-11-19 11:27:53.620270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.277 [2024-11-19 11:27:53.620299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.277 qpair failed and we were unable to recover it. 
00:25:58.277 [2024-11-19 11:27:53.630168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.630288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.630313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.630328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.277 [2024-11-19 11:27:53.630340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.277 [2024-11-19 11:27:53.630374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.277 qpair failed and we were unable to recover it. 
00:25:58.277 [2024-11-19 11:27:53.640198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.640318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.640344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.640358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.277 [2024-11-19 11:27:53.640382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.277 [2024-11-19 11:27:53.640412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.277 qpair failed and we were unable to recover it. 
00:25:58.277 [2024-11-19 11:27:53.650241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.650347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.650384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.650400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.277 [2024-11-19 11:27:53.650412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.277 [2024-11-19 11:27:53.650440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.277 qpair failed and we were unable to recover it. 
00:25:58.277 [2024-11-19 11:27:53.660216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.660323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.660349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.660371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.277 [2024-11-19 11:27:53.660386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.277 [2024-11-19 11:27:53.660415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.277 qpair failed and we were unable to recover it. 
00:25:58.277 [2024-11-19 11:27:53.670259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.670359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.670398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.670414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.277 [2024-11-19 11:27:53.670426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.277 [2024-11-19 11:27:53.670455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.277 qpair failed and we were unable to recover it. 
00:25:58.277 [2024-11-19 11:27:53.680302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.680425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.680451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.680466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.277 [2024-11-19 11:27:53.680478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.277 [2024-11-19 11:27:53.680508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.277 qpair failed and we were unable to recover it. 
00:25:58.277 [2024-11-19 11:27:53.690305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.690420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.690446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.690460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.277 [2024-11-19 11:27:53.690472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.277 [2024-11-19 11:27:53.690502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.277 qpair failed and we were unable to recover it. 
00:25:58.277 [2024-11-19 11:27:53.700378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.700470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.700494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.700508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.277 [2024-11-19 11:27:53.700520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.277 [2024-11-19 11:27:53.700549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.277 qpair failed and we were unable to recover it. 
00:25:58.277 [2024-11-19 11:27:53.710415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.710524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.710549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.710563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.277 [2024-11-19 11:27:53.710575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.277 [2024-11-19 11:27:53.710612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.277 qpair failed and we were unable to recover it. 
00:25:58.277 [2024-11-19 11:27:53.720421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.277 [2024-11-19 11:27:53.720527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.277 [2024-11-19 11:27:53.720553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.277 [2024-11-19 11:27:53.720567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.278 [2024-11-19 11:27:53.720578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.278 [2024-11-19 11:27:53.720608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.278 qpair failed and we were unable to recover it. 
00:25:58.278 [2024-11-19 11:27:53.730517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.278 [2024-11-19 11:27:53.730629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.278 [2024-11-19 11:27:53.730654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.278 [2024-11-19 11:27:53.730668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.278 [2024-11-19 11:27:53.730680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.278 [2024-11-19 11:27:53.730709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.278 qpair failed and we were unable to recover it. 
00:25:58.278 [2024-11-19 11:27:53.740481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.278 [2024-11-19 11:27:53.740572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.278 [2024-11-19 11:27:53.740596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.278 [2024-11-19 11:27:53.740610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.278 [2024-11-19 11:27:53.740622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.278 [2024-11-19 11:27:53.740650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.278 qpair failed and we were unable to recover it. 
00:25:58.278 [2024-11-19 11:27:53.750584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.278 [2024-11-19 11:27:53.750715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.278 [2024-11-19 11:27:53.750740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.278 [2024-11-19 11:27:53.750755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.278 [2024-11-19 11:27:53.750767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.278 [2024-11-19 11:27:53.750795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.278 qpair failed and we were unable to recover it. 
00:25:58.278 [2024-11-19 11:27:53.760543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.278 [2024-11-19 11:27:53.760624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.278 [2024-11-19 11:27:53.760647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.278 [2024-11-19 11:27:53.760661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.278 [2024-11-19 11:27:53.760673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.278 [2024-11-19 11:27:53.760701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.278 qpair failed and we were unable to recover it. 
00:25:58.278 [2024-11-19 11:27:53.770623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.278 [2024-11-19 11:27:53.770747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.278 [2024-11-19 11:27:53.770770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.278 [2024-11-19 11:27:53.770785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.278 [2024-11-19 11:27:53.770797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.278 [2024-11-19 11:27:53.770829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.278 qpair failed and we were unable to recover it. 
00:25:58.537 [2024-11-19 11:27:53.780645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.537 [2024-11-19 11:27:53.780772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.537 [2024-11-19 11:27:53.780798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.537 [2024-11-19 11:27:53.780812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.537 [2024-11-19 11:27:53.780824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.537 [2024-11-19 11:27:53.780852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.537 qpair failed and we were unable to recover it. 
00:25:58.537 [2024-11-19 11:27:53.790640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.537 [2024-11-19 11:27:53.790752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.537 [2024-11-19 11:27:53.790776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.537 [2024-11-19 11:27:53.790790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.537 [2024-11-19 11:27:53.790803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.537 [2024-11-19 11:27:53.790831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.537 qpair failed and we were unable to recover it. 
00:25:58.537 [2024-11-19 11:27:53.800718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.537 [2024-11-19 11:27:53.800823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.537 [2024-11-19 11:27:53.800863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.537 [2024-11-19 11:27:53.800879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.537 [2024-11-19 11:27:53.800891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.537 [2024-11-19 11:27:53.800920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.537 qpair failed and we were unable to recover it. 
00:25:58.537 [... the same seven-line CONNECT failure sequence (Unknown controller ID 0x1; Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; Connect command completed with error: sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x1045fa0; CQ transport error -6 (No such device or address) on qpair id 3; qpair failed and we were unable to recover it) repeated every ~10 ms from 2024-11-19 11:27:53.810 through 11:27:54.131 ...]
00:25:58.800 [2024-11-19 11:27:54.141612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.141695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.800 [2024-11-19 11:27:54.141719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.800 [2024-11-19 11:27:54.141732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.800 [2024-11-19 11:27:54.141744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.800 [2024-11-19 11:27:54.141772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.800 qpair failed and we were unable to recover it. 
00:25:58.800 [2024-11-19 11:27:54.151696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.151816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.800 [2024-11-19 11:27:54.151842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.800 [2024-11-19 11:27:54.151856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.800 [2024-11-19 11:27:54.151868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.800 [2024-11-19 11:27:54.151897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.800 qpair failed and we were unable to recover it. 
00:25:58.800 [2024-11-19 11:27:54.161708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.161810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.800 [2024-11-19 11:27:54.161836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.800 [2024-11-19 11:27:54.161850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.800 [2024-11-19 11:27:54.161862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.800 [2024-11-19 11:27:54.161897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.800 qpair failed and we were unable to recover it. 
00:25:58.800 [2024-11-19 11:27:54.171749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.171858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.800 [2024-11-19 11:27:54.171884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.800 [2024-11-19 11:27:54.171898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.800 [2024-11-19 11:27:54.171911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.800 [2024-11-19 11:27:54.171939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.800 qpair failed and we were unable to recover it. 
00:25:58.800 [2024-11-19 11:27:54.181908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.182013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.800 [2024-11-19 11:27:54.182043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.800 [2024-11-19 11:27:54.182058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.800 [2024-11-19 11:27:54.182070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.800 [2024-11-19 11:27:54.182099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.800 qpair failed and we were unable to recover it. 
00:25:58.800 [2024-11-19 11:27:54.191804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.191920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.800 [2024-11-19 11:27:54.191949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.800 [2024-11-19 11:27:54.191965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.800 [2024-11-19 11:27:54.191977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.800 [2024-11-19 11:27:54.192005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.800 qpair failed and we were unable to recover it. 
00:25:58.800 [2024-11-19 11:27:54.201799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.201899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.800 [2024-11-19 11:27:54.201923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.800 [2024-11-19 11:27:54.201937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.800 [2024-11-19 11:27:54.201949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.800 [2024-11-19 11:27:54.201978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.800 qpair failed and we were unable to recover it. 
00:25:58.800 [2024-11-19 11:27:54.211830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.211968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.800 [2024-11-19 11:27:54.211993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.800 [2024-11-19 11:27:54.212008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.800 [2024-11-19 11:27:54.212020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.800 [2024-11-19 11:27:54.212048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.800 qpair failed and we were unable to recover it. 
00:25:58.800 [2024-11-19 11:27:54.221841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.221944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.800 [2024-11-19 11:27:54.221970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.800 [2024-11-19 11:27:54.221985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.800 [2024-11-19 11:27:54.221997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.800 [2024-11-19 11:27:54.222026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.800 qpair failed and we were unable to recover it. 
00:25:58.800 [2024-11-19 11:27:54.231873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.231977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.800 [2024-11-19 11:27:54.232002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.800 [2024-11-19 11:27:54.232017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.800 [2024-11-19 11:27:54.232034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.800 [2024-11-19 11:27:54.232063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.800 qpair failed and we were unable to recover it. 
00:25:58.800 [2024-11-19 11:27:54.241918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.242046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.800 [2024-11-19 11:27:54.242073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.800 [2024-11-19 11:27:54.242087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.800 [2024-11-19 11:27:54.242099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.800 [2024-11-19 11:27:54.242127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.800 qpair failed and we were unable to recover it. 
00:25:58.800 [2024-11-19 11:27:54.251950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.252092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.800 [2024-11-19 11:27:54.252118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.800 [2024-11-19 11:27:54.252133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.800 [2024-11-19 11:27:54.252145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.800 [2024-11-19 11:27:54.252173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.800 qpair failed and we were unable to recover it. 
00:25:58.800 [2024-11-19 11:27:54.261949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.800 [2024-11-19 11:27:54.262053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.801 [2024-11-19 11:27:54.262077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.801 [2024-11-19 11:27:54.262092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.801 [2024-11-19 11:27:54.262104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.801 [2024-11-19 11:27:54.262132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.801 qpair failed and we were unable to recover it. 
00:25:58.801 [2024-11-19 11:27:54.271975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.801 [2024-11-19 11:27:54.272080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.801 [2024-11-19 11:27:54.272104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.801 [2024-11-19 11:27:54.272118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.801 [2024-11-19 11:27:54.272130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.801 [2024-11-19 11:27:54.272158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.801 qpair failed and we were unable to recover it. 
00:25:58.801 [2024-11-19 11:27:54.282042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.801 [2024-11-19 11:27:54.282145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.801 [2024-11-19 11:27:54.282170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.801 [2024-11-19 11:27:54.282184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.801 [2024-11-19 11:27:54.282196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.801 [2024-11-19 11:27:54.282224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.801 qpair failed and we were unable to recover it. 
00:25:58.801 [2024-11-19 11:27:54.292037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:58.801 [2024-11-19 11:27:54.292166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:58.801 [2024-11-19 11:27:54.292191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:58.801 [2024-11-19 11:27:54.292205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:58.801 [2024-11-19 11:27:54.292218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:58.801 [2024-11-19 11:27:54.292247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:58.801 qpair failed and we were unable to recover it. 
00:25:59.060 [2024-11-19 11:27:54.302125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.060 [2024-11-19 11:27:54.302269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.060 [2024-11-19 11:27:54.302294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.060 [2024-11-19 11:27:54.302308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.060 [2024-11-19 11:27:54.302321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.060 [2024-11-19 11:27:54.302349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.060 qpair failed and we were unable to recover it. 
00:25:59.060 [2024-11-19 11:27:54.312071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.060 [2024-11-19 11:27:54.312191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.060 [2024-11-19 11:27:54.312216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.060 [2024-11-19 11:27:54.312230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.060 [2024-11-19 11:27:54.312243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.060 [2024-11-19 11:27:54.312271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.060 qpair failed and we were unable to recover it. 
00:25:59.060 [2024-11-19 11:27:54.322095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.060 [2024-11-19 11:27:54.322190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.060 [2024-11-19 11:27:54.322222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.060 [2024-11-19 11:27:54.322238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.060 [2024-11-19 11:27:54.322249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.060 [2024-11-19 11:27:54.322278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.060 qpair failed and we were unable to recover it. 
00:25:59.060 [2024-11-19 11:27:54.332159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.060 [2024-11-19 11:27:54.332272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.060 [2024-11-19 11:27:54.332296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.060 [2024-11-19 11:27:54.332311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.060 [2024-11-19 11:27:54.332323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.060 [2024-11-19 11:27:54.332352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.060 qpair failed and we were unable to recover it. 
00:25:59.060 [2024-11-19 11:27:54.342231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.060 [2024-11-19 11:27:54.342381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.060 [2024-11-19 11:27:54.342419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.060 [2024-11-19 11:27:54.342433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.060 [2024-11-19 11:27:54.342446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.060 [2024-11-19 11:27:54.342475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.060 qpair failed and we were unable to recover it. 
00:25:59.060 [2024-11-19 11:27:54.352166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.060 [2024-11-19 11:27:54.352267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.060 [2024-11-19 11:27:54.352295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.060 [2024-11-19 11:27:54.352310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.060 [2024-11-19 11:27:54.352322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.060 [2024-11-19 11:27:54.352351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.060 qpair failed and we were unable to recover it. 
00:25:59.060 [2024-11-19 11:27:54.362245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.060 [2024-11-19 11:27:54.362372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.060 [2024-11-19 11:27:54.362405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.060 [2024-11-19 11:27:54.362419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.060 [2024-11-19 11:27:54.362440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.060 [2024-11-19 11:27:54.362470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.060 qpair failed and we were unable to recover it. 
00:25:59.060 [2024-11-19 11:27:54.372295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.060 [2024-11-19 11:27:54.372455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.060 [2024-11-19 11:27:54.372481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.060 [2024-11-19 11:27:54.372496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.060 [2024-11-19 11:27:54.372508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.060 [2024-11-19 11:27:54.372537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.060 qpair failed and we were unable to recover it. 
00:25:59.060 [2024-11-19 11:27:54.382309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.060 [2024-11-19 11:27:54.382430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.060 [2024-11-19 11:27:54.382454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.060 [2024-11-19 11:27:54.382468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.060 [2024-11-19 11:27:54.382479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.060 [2024-11-19 11:27:54.382509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.060 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.392324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.392436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.392462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.061 [2024-11-19 11:27:54.392476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.061 [2024-11-19 11:27:54.392488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.061 [2024-11-19 11:27:54.392517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.061 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.402345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.402458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.402483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.061 [2024-11-19 11:27:54.402497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.061 [2024-11-19 11:27:54.402509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.061 [2024-11-19 11:27:54.402538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.061 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.412434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.412525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.412548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.061 [2024-11-19 11:27:54.412562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.061 [2024-11-19 11:27:54.412575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.061 [2024-11-19 11:27:54.412604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.061 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.422412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.422517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.422542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.061 [2024-11-19 11:27:54.422557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.061 [2024-11-19 11:27:54.422568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.061 [2024-11-19 11:27:54.422597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.061 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.432459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.432595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.432621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.061 [2024-11-19 11:27:54.432636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.061 [2024-11-19 11:27:54.432648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.061 [2024-11-19 11:27:54.432675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.061 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.442436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.442522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.442546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.061 [2024-11-19 11:27:54.442560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.061 [2024-11-19 11:27:54.442572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.061 [2024-11-19 11:27:54.442600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.061 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.452547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.452635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.452664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.061 [2024-11-19 11:27:54.452679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.061 [2024-11-19 11:27:54.452691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.061 [2024-11-19 11:27:54.452719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.061 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.462550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.462643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.462667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.061 [2024-11-19 11:27:54.462681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.061 [2024-11-19 11:27:54.462693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.061 [2024-11-19 11:27:54.462721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.061 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.472576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.472661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.472685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.061 [2024-11-19 11:27:54.472699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.061 [2024-11-19 11:27:54.472711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.061 [2024-11-19 11:27:54.472740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.061 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.482621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.482719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.482745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.061 [2024-11-19 11:27:54.482759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.061 [2024-11-19 11:27:54.482771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.061 [2024-11-19 11:27:54.482800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.061 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.492645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.492750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.492774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.061 [2024-11-19 11:27:54.492788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.061 [2024-11-19 11:27:54.492806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.061 [2024-11-19 11:27:54.492835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.061 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.502708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.502827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.502853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.061 [2024-11-19 11:27:54.502867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.061 [2024-11-19 11:27:54.502880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.061 [2024-11-19 11:27:54.502914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.061 qpair failed and we were unable to recover it. 
00:25:59.061 [2024-11-19 11:27:54.512726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.061 [2024-11-19 11:27:54.512824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.061 [2024-11-19 11:27:54.512847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.062 [2024-11-19 11:27:54.512861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.062 [2024-11-19 11:27:54.512873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.062 [2024-11-19 11:27:54.512901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.062 qpair failed and we were unable to recover it. 
00:25:59.062 [2024-11-19 11:27:54.522753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.062 [2024-11-19 11:27:54.522854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.062 [2024-11-19 11:27:54.522879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.062 [2024-11-19 11:27:54.522894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.062 [2024-11-19 11:27:54.522906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.062 [2024-11-19 11:27:54.522935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.062 qpair failed and we were unable to recover it. 
00:25:59.062 [2024-11-19 11:27:54.532755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.062 [2024-11-19 11:27:54.532910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.062 [2024-11-19 11:27:54.532935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.062 [2024-11-19 11:27:54.532949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.062 [2024-11-19 11:27:54.532962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.062 [2024-11-19 11:27:54.532990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.062 qpair failed and we were unable to recover it. 
00:25:59.062 [2024-11-19 11:27:54.542783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.062 [2024-11-19 11:27:54.542885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.062 [2024-11-19 11:27:54.542913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.062 [2024-11-19 11:27:54.542928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.062 [2024-11-19 11:27:54.542940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.062 [2024-11-19 11:27:54.542968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.062 qpair failed and we were unable to recover it. 
00:25:59.062 [2024-11-19 11:27:54.552824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.062 [2024-11-19 11:27:54.552920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.062 [2024-11-19 11:27:54.552944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.062 [2024-11-19 11:27:54.552958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.062 [2024-11-19 11:27:54.552970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.062 [2024-11-19 11:27:54.552999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.062 qpair failed and we were unable to recover it. 
00:25:59.321 [2024-11-19 11:27:54.562833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.321 [2024-11-19 11:27:54.562936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.321 [2024-11-19 11:27:54.562961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.321 [2024-11-19 11:27:54.562976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.321 [2024-11-19 11:27:54.562988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.321 [2024-11-19 11:27:54.563017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.321 qpair failed and we were unable to recover it. 
00:25:59.321 [2024-11-19 11:27:54.572888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.321 [2024-11-19 11:27:54.572998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.321 [2024-11-19 11:27:54.573023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.321 [2024-11-19 11:27:54.573037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.321 [2024-11-19 11:27:54.573050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.321 [2024-11-19 11:27:54.573078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.321 qpair failed and we were unable to recover it. 
00:25:59.321 [2024-11-19 11:27:54.582924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.321 [2024-11-19 11:27:54.583027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.321 [2024-11-19 11:27:54.583056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.321 [2024-11-19 11:27:54.583071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.321 [2024-11-19 11:27:54.583084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.321 [2024-11-19 11:27:54.583113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.321 qpair failed and we were unable to recover it. 
00:25:59.321 [2024-11-19 11:27:54.592928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.321 [2024-11-19 11:27:54.593046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.321 [2024-11-19 11:27:54.593071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.321 [2024-11-19 11:27:54.593085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.321 [2024-11-19 11:27:54.593097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.321 [2024-11-19 11:27:54.593126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.321 qpair failed and we were unable to recover it. 
00:25:59.321 [2024-11-19 11:27:54.602957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.321 [2024-11-19 11:27:54.603056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.321 [2024-11-19 11:27:54.603085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.321 [2024-11-19 11:27:54.603099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.321 [2024-11-19 11:27:54.603111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.321 [2024-11-19 11:27:54.603140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.321 qpair failed and we were unable to recover it. 
00:25:59.321 [2024-11-19 11:27:54.613007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.321 [2024-11-19 11:27:54.613115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.321 [2024-11-19 11:27:54.613144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.321 [2024-11-19 11:27:54.613158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.321 [2024-11-19 11:27:54.613171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.321 [2024-11-19 11:27:54.613199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.321 qpair failed and we were unable to recover it. 
00:25:59.321 [2024-11-19 11:27:54.622993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.321 [2024-11-19 11:27:54.623096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.321 [2024-11-19 11:27:54.623120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.321 [2024-11-19 11:27:54.623135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.321 [2024-11-19 11:27:54.623153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.321 [2024-11-19 11:27:54.623183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.321 qpair failed and we were unable to recover it. 
00:25:59.321 [2024-11-19 11:27:54.633046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.321 [2024-11-19 11:27:54.633143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.321 [2024-11-19 11:27:54.633167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.322 [2024-11-19 11:27:54.633181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.322 [2024-11-19 11:27:54.633193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.322 [2024-11-19 11:27:54.633221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.322 qpair failed and we were unable to recover it. 
00:25:59.322 [2024-11-19 11:27:54.643056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.322 [2024-11-19 11:27:54.643157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.322 [2024-11-19 11:27:54.643182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.322 [2024-11-19 11:27:54.643197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.322 [2024-11-19 11:27:54.643209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.322 [2024-11-19 11:27:54.643238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.322 qpair failed and we were unable to recover it. 
00:25:59.322 [2024-11-19 11:27:54.653104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.322 [2024-11-19 11:27:54.653213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.322 [2024-11-19 11:27:54.653238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.322 [2024-11-19 11:27:54.653252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.322 [2024-11-19 11:27:54.653265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.322 [2024-11-19 11:27:54.653294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.322 qpair failed and we were unable to recover it. 
00:25:59.322 [2024-11-19 11:27:54.663106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.322 [2024-11-19 11:27:54.663210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.322 [2024-11-19 11:27:54.663235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.322 [2024-11-19 11:27:54.663249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.322 [2024-11-19 11:27:54.663261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.322 [2024-11-19 11:27:54.663290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.322 qpair failed and we were unable to recover it. 
00:25:59.322 [2024-11-19 11:27:54.673111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.322 [2024-11-19 11:27:54.673210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.322 [2024-11-19 11:27:54.673233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.322 [2024-11-19 11:27:54.673247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.322 [2024-11-19 11:27:54.673260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.322 [2024-11-19 11:27:54.673288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.322 qpair failed and we were unable to recover it. 
00:25:59.322 [2024-11-19 11:27:54.683123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.322 [2024-11-19 11:27:54.683219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.322 [2024-11-19 11:27:54.683244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.322 [2024-11-19 11:27:54.683258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.322 [2024-11-19 11:27:54.683270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.322 [2024-11-19 11:27:54.683298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.322 qpair failed and we were unable to recover it. 
00:25:59.322 [2024-11-19 11:27:54.693269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.322 [2024-11-19 11:27:54.693394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.322 [2024-11-19 11:27:54.693419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.322 [2024-11-19 11:27:54.693434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.322 [2024-11-19 11:27:54.693446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.322 [2024-11-19 11:27:54.693476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.322 qpair failed and we were unable to recover it. 
00:25:59.322 [2024-11-19 11:27:54.703240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.322 [2024-11-19 11:27:54.703337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.322 [2024-11-19 11:27:54.703370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.322 [2024-11-19 11:27:54.703387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.322 [2024-11-19 11:27:54.703400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.322 [2024-11-19 11:27:54.703429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.322 qpair failed and we were unable to recover it. 
00:25:59.322 [2024-11-19 11:27:54.713253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.322 [2024-11-19 11:27:54.713379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.322 [2024-11-19 11:27:54.713410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.322 [2024-11-19 11:27:54.713426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.322 [2024-11-19 11:27:54.713438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.322 [2024-11-19 11:27:54.713467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.322 qpair failed and we were unable to recover it. 
00:25:59.322 [2024-11-19 11:27:54.723334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.322 [2024-11-19 11:27:54.723443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.322 [2024-11-19 11:27:54.723467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.322 [2024-11-19 11:27:54.723481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.322 [2024-11-19 11:27:54.723493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.322 [2024-11-19 11:27:54.723522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.322 qpair failed and we were unable to recover it. 
00:25:59.322 [2024-11-19 11:27:54.733412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.322 [2024-11-19 11:27:54.733505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.322 [2024-11-19 11:27:54.733529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.322 [2024-11-19 11:27:54.733543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.322 [2024-11-19 11:27:54.733555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.322 [2024-11-19 11:27:54.733584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.322 qpair failed and we were unable to recover it.
00:25:59.322 [2024-11-19 11:27:54.743333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.322 [2024-11-19 11:27:54.743453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.322 [2024-11-19 11:27:54.743479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.322 [2024-11-19 11:27:54.743493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.322 [2024-11-19 11:27:54.743505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.322 [2024-11-19 11:27:54.743534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.322 qpair failed and we were unable to recover it.
00:25:59.322 [2024-11-19 11:27:54.753336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.322 [2024-11-19 11:27:54.753449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.322 [2024-11-19 11:27:54.753473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.322 [2024-11-19 11:27:54.753487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.322 [2024-11-19 11:27:54.753505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.322 [2024-11-19 11:27:54.753534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.322 qpair failed and we were unable to recover it.
00:25:59.323 [2024-11-19 11:27:54.763387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.323 [2024-11-19 11:27:54.763472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.323 [2024-11-19 11:27:54.763496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.323 [2024-11-19 11:27:54.763510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.323 [2024-11-19 11:27:54.763522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.323 [2024-11-19 11:27:54.763551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.323 qpair failed and we were unable to recover it.
00:25:59.323 [2024-11-19 11:27:54.773428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.323 [2024-11-19 11:27:54.773520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.323 [2024-11-19 11:27:54.773548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.323 [2024-11-19 11:27:54.773562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.323 [2024-11-19 11:27:54.773574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.323 [2024-11-19 11:27:54.773602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.323 qpair failed and we were unable to recover it.
00:25:59.323 [2024-11-19 11:27:54.783438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.323 [2024-11-19 11:27:54.783525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.323 [2024-11-19 11:27:54.783549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.323 [2024-11-19 11:27:54.783563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.323 [2024-11-19 11:27:54.783575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.323 [2024-11-19 11:27:54.783603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.323 qpair failed and we were unable to recover it.
00:25:59.323 [2024-11-19 11:27:54.793487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.323 [2024-11-19 11:27:54.793577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.323 [2024-11-19 11:27:54.793602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.323 [2024-11-19 11:27:54.793615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.323 [2024-11-19 11:27:54.793627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.323 [2024-11-19 11:27:54.793656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.323 qpair failed and we were unable to recover it.
00:25:59.323 [2024-11-19 11:27:54.803523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.323 [2024-11-19 11:27:54.803608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.323 [2024-11-19 11:27:54.803633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.323 [2024-11-19 11:27:54.803647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.323 [2024-11-19 11:27:54.803658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.323 [2024-11-19 11:27:54.803687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.323 qpair failed and we were unable to recover it.
00:25:59.323 [2024-11-19 11:27:54.813591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.323 [2024-11-19 11:27:54.813704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.323 [2024-11-19 11:27:54.813730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.323 [2024-11-19 11:27:54.813744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.323 [2024-11-19 11:27:54.813756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.323 [2024-11-19 11:27:54.813784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.323 qpair failed and we were unable to recover it.
00:25:59.582 [2024-11-19 11:27:54.823563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.582 [2024-11-19 11:27:54.823672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.582 [2024-11-19 11:27:54.823698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.582 [2024-11-19 11:27:54.823712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.582 [2024-11-19 11:27:54.823724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.582 [2024-11-19 11:27:54.823751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.582 qpair failed and we were unable to recover it.
00:25:59.582 [2024-11-19 11:27:54.833595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.582 [2024-11-19 11:27:54.833679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.582 [2024-11-19 11:27:54.833702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.582 [2024-11-19 11:27:54.833717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.582 [2024-11-19 11:27:54.833729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.582 [2024-11-19 11:27:54.833758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.582 qpair failed and we were unable to recover it.
00:25:59.582 [2024-11-19 11:27:54.843654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.582 [2024-11-19 11:27:54.843740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.582 [2024-11-19 11:27:54.843770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.582 [2024-11-19 11:27:54.843785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.582 [2024-11-19 11:27:54.843797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.582 [2024-11-19 11:27:54.843825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.582 qpair failed and we were unable to recover it.
00:25:59.582 [2024-11-19 11:27:54.853712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.582 [2024-11-19 11:27:54.853821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.582 [2024-11-19 11:27:54.853847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.582 [2024-11-19 11:27:54.853861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.582 [2024-11-19 11:27:54.853874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.582 [2024-11-19 11:27:54.853902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.582 qpair failed and we were unable to recover it.
00:25:59.582 [2024-11-19 11:27:54.863723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.582 [2024-11-19 11:27:54.863829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.582 [2024-11-19 11:27:54.863854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.582 [2024-11-19 11:27:54.863868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.582 [2024-11-19 11:27:54.863880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.582 [2024-11-19 11:27:54.863908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.582 qpair failed and we were unable to recover it.
00:25:59.582 [2024-11-19 11:27:54.873738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.582 [2024-11-19 11:27:54.873845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.582 [2024-11-19 11:27:54.873870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.582 [2024-11-19 11:27:54.873885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.582 [2024-11-19 11:27:54.873897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.582 [2024-11-19 11:27:54.873925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.582 qpair failed and we were unable to recover it.
00:25:59.582 [2024-11-19 11:27:54.883735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.582 [2024-11-19 11:27:54.883837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.582 [2024-11-19 11:27:54.883861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.582 [2024-11-19 11:27:54.883876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.582 [2024-11-19 11:27:54.883894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.582 [2024-11-19 11:27:54.883923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.582 qpair failed and we were unable to recover it.
00:25:59.582 [2024-11-19 11:27:54.893771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.582 [2024-11-19 11:27:54.893876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.582 [2024-11-19 11:27:54.893900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.582 [2024-11-19 11:27:54.893914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.582 [2024-11-19 11:27:54.893926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.582 [2024-11-19 11:27:54.893955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.582 qpair failed and we were unable to recover it.
00:25:59.582 [2024-11-19 11:27:54.903799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.582 [2024-11-19 11:27:54.903909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.582 [2024-11-19 11:27:54.903935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.582 [2024-11-19 11:27:54.903949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.582 [2024-11-19 11:27:54.903961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.582 [2024-11-19 11:27:54.903990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.582 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:54.913829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:54.913932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:54.913956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.583 [2024-11-19 11:27:54.913970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.583 [2024-11-19 11:27:54.913982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.583 [2024-11-19 11:27:54.914010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.583 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:54.923830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:54.923947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:54.923972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.583 [2024-11-19 11:27:54.923986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.583 [2024-11-19 11:27:54.923998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.583 [2024-11-19 11:27:54.924027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.583 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:54.933911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:54.934054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:54.934079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.583 [2024-11-19 11:27:54.934093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.583 [2024-11-19 11:27:54.934105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.583 [2024-11-19 11:27:54.934133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.583 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:54.943917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:54.944020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:54.944045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.583 [2024-11-19 11:27:54.944059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.583 [2024-11-19 11:27:54.944071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.583 [2024-11-19 11:27:54.944099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.583 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:54.953956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:54.954106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:54.954130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.583 [2024-11-19 11:27:54.954145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.583 [2024-11-19 11:27:54.954157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.583 [2024-11-19 11:27:54.954186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.583 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:54.963964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:54.964063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:54.964087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.583 [2024-11-19 11:27:54.964101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.583 [2024-11-19 11:27:54.964112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.583 [2024-11-19 11:27:54.964141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.583 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:54.974092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:54.974198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:54.974228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.583 [2024-11-19 11:27:54.974243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.583 [2024-11-19 11:27:54.974255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.583 [2024-11-19 11:27:54.974284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.583 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:54.984031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:54.984132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:54.984155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.583 [2024-11-19 11:27:54.984169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.583 [2024-11-19 11:27:54.984181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.583 [2024-11-19 11:27:54.984210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.583 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:54.994018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:54.994118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:54.994147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.583 [2024-11-19 11:27:54.994162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.583 [2024-11-19 11:27:54.994174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.583 [2024-11-19 11:27:54.994202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.583 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:55.004057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:55.004157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:55.004186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.583 [2024-11-19 11:27:55.004200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.583 [2024-11-19 11:27:55.004212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.583 [2024-11-19 11:27:55.004241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.583 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:55.014092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:55.014202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:55.014227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.583 [2024-11-19 11:27:55.014242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.583 [2024-11-19 11:27:55.014261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.583 [2024-11-19 11:27:55.014291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.583 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:55.024117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:55.024220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:55.024245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.583 [2024-11-19 11:27:55.024260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.583 [2024-11-19 11:27:55.024272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.583 [2024-11-19 11:27:55.024300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.583 qpair failed and we were unable to recover it.
00:25:59.583 [2024-11-19 11:27:55.034161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.583 [2024-11-19 11:27:55.034259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.583 [2024-11-19 11:27:55.034284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.584 [2024-11-19 11:27:55.034298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.584 [2024-11-19 11:27:55.034310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.584 [2024-11-19 11:27:55.034337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.584 qpair failed and we were unable to recover it.
00:25:59.584 [2024-11-19 11:27:55.044153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.584 [2024-11-19 11:27:55.044252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.584 [2024-11-19 11:27:55.044277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.584 [2024-11-19 11:27:55.044291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.584 [2024-11-19 11:27:55.044304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.584 [2024-11-19 11:27:55.044332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.584 qpair failed and we were unable to recover it.
00:25:59.584 [2024-11-19 11:27:55.054203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.584 [2024-11-19 11:27:55.054308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.584 [2024-11-19 11:27:55.054333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.584 [2024-11-19 11:27:55.054347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.584 [2024-11-19 11:27:55.054360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.584 [2024-11-19 11:27:55.054399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.584 qpair failed and we were unable to recover it.
00:25:59.584 [2024-11-19 11:27:55.064227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.584 [2024-11-19 11:27:55.064331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.584 [2024-11-19 11:27:55.064357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.584 [2024-11-19 11:27:55.064380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.584 [2024-11-19 11:27:55.064393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.584 [2024-11-19 11:27:55.064422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.584 qpair failed and we were unable to recover it.
00:25:59.584 [2024-11-19 11:27:55.074350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.584 [2024-11-19 11:27:55.074445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.584 [2024-11-19 11:27:55.074468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.584 [2024-11-19 11:27:55.074482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.584 [2024-11-19 11:27:55.074494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0
00:25:59.584 [2024-11-19 11:27:55.074523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:59.584 qpair failed and we were unable to recover it.
00:25:59.843 [2024-11-19 11:27:55.084289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.843 [2024-11-19 11:27:55.084407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.843 [2024-11-19 11:27:55.084433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.843 [2024-11-19 11:27:55.084448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.843 [2024-11-19 11:27:55.084461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.843 [2024-11-19 11:27:55.084490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.843 qpair failed and we were unable to recover it. 
00:25:59.843 [2024-11-19 11:27:55.094305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.843 [2024-11-19 11:27:55.094422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.843 [2024-11-19 11:27:55.094447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.843 [2024-11-19 11:27:55.094462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.843 [2024-11-19 11:27:55.094474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.843 [2024-11-19 11:27:55.094504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.843 qpair failed and we were unable to recover it. 
00:25:59.843 [2024-11-19 11:27:55.104387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.843 [2024-11-19 11:27:55.104526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.843 [2024-11-19 11:27:55.104557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.843 [2024-11-19 11:27:55.104572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.843 [2024-11-19 11:27:55.104584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.843 [2024-11-19 11:27:55.104613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.843 qpair failed and we were unable to recover it. 
00:25:59.843 [2024-11-19 11:27:55.114405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.843 [2024-11-19 11:27:55.114501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.843 [2024-11-19 11:27:55.114527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.843 [2024-11-19 11:27:55.114541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.843 [2024-11-19 11:27:55.114553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.843 [2024-11-19 11:27:55.114582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.843 qpair failed and we were unable to recover it. 
00:25:59.843 [2024-11-19 11:27:55.124416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.843 [2024-11-19 11:27:55.124507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.843 [2024-11-19 11:27:55.124531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.843 [2024-11-19 11:27:55.124545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.843 [2024-11-19 11:27:55.124557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.843 [2024-11-19 11:27:55.124586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.843 qpair failed and we were unable to recover it. 
00:25:59.843 [2024-11-19 11:27:55.134453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.843 [2024-11-19 11:27:55.134583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.843 [2024-11-19 11:27:55.134608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.843 [2024-11-19 11:27:55.134623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.843 [2024-11-19 11:27:55.134635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.843 [2024-11-19 11:27:55.134663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.843 qpair failed and we were unable to recover it. 
00:25:59.843 [2024-11-19 11:27:55.144469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.843 [2024-11-19 11:27:55.144561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.844 [2024-11-19 11:27:55.144585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.844 [2024-11-19 11:27:55.144599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.844 [2024-11-19 11:27:55.144616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.844 [2024-11-19 11:27:55.144645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.844 qpair failed and we were unable to recover it. 
00:25:59.844 [2024-11-19 11:27:55.154499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.844 [2024-11-19 11:27:55.154587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.844 [2024-11-19 11:27:55.154610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.844 [2024-11-19 11:27:55.154624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.844 [2024-11-19 11:27:55.154636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.844 [2024-11-19 11:27:55.154665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.844 qpair failed and we were unable to recover it. 
00:25:59.844 [2024-11-19 11:27:55.164515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.844 [2024-11-19 11:27:55.164614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.844 [2024-11-19 11:27:55.164640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.844 [2024-11-19 11:27:55.164654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.844 [2024-11-19 11:27:55.164666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.844 [2024-11-19 11:27:55.164694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.844 qpair failed and we were unable to recover it. 
00:25:59.844 [2024-11-19 11:27:55.174601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.844 [2024-11-19 11:27:55.174697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.844 [2024-11-19 11:27:55.174722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.844 [2024-11-19 11:27:55.174737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.844 [2024-11-19 11:27:55.174749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.844 [2024-11-19 11:27:55.174777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.844 qpair failed and we were unable to recover it. 
00:25:59.844 [2024-11-19 11:27:55.184593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.844 [2024-11-19 11:27:55.184684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.844 [2024-11-19 11:27:55.184707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.844 [2024-11-19 11:27:55.184722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.844 [2024-11-19 11:27:55.184734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.844 [2024-11-19 11:27:55.184762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.844 qpair failed and we were unable to recover it. 
00:25:59.844 [2024-11-19 11:27:55.194681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.844 [2024-11-19 11:27:55.194780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.844 [2024-11-19 11:27:55.194804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.844 [2024-11-19 11:27:55.194818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.844 [2024-11-19 11:27:55.194830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.844 [2024-11-19 11:27:55.194859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.844 qpair failed and we were unable to recover it. 
00:25:59.844 [2024-11-19 11:27:55.204716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.844 [2024-11-19 11:27:55.204821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.844 [2024-11-19 11:27:55.204846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.844 [2024-11-19 11:27:55.204861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.844 [2024-11-19 11:27:55.204873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.844 [2024-11-19 11:27:55.204902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.844 qpair failed and we were unable to recover it. 
00:25:59.844 [2024-11-19 11:27:55.214789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.844 [2024-11-19 11:27:55.214893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.844 [2024-11-19 11:27:55.214918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.844 [2024-11-19 11:27:55.214932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.844 [2024-11-19 11:27:55.214944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.844 [2024-11-19 11:27:55.214973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.844 qpair failed and we were unable to recover it. 
00:25:59.844 [2024-11-19 11:27:55.224726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.844 [2024-11-19 11:27:55.224826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.844 [2024-11-19 11:27:55.224850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.844 [2024-11-19 11:27:55.224864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.844 [2024-11-19 11:27:55.224876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.844 [2024-11-19 11:27:55.224905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.844 qpair failed and we were unable to recover it. 
00:25:59.844 [2024-11-19 11:27:55.234781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.844 [2024-11-19 11:27:55.234881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.844 [2024-11-19 11:27:55.234910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.844 [2024-11-19 11:27:55.234924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.844 [2024-11-19 11:27:55.234936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.844 [2024-11-19 11:27:55.234964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.844 qpair failed and we were unable to recover it. 
00:25:59.844 [2024-11-19 11:27:55.244782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.844 [2024-11-19 11:27:55.244901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.844 [2024-11-19 11:27:55.244926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.844 [2024-11-19 11:27:55.244941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.844 [2024-11-19 11:27:55.244953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.844 [2024-11-19 11:27:55.244981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.844 qpair failed and we were unable to recover it. 
00:25:59.844 [2024-11-19 11:27:55.254874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.844 [2024-11-19 11:27:55.254996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.844 [2024-11-19 11:27:55.255022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.844 [2024-11-19 11:27:55.255036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.844 [2024-11-19 11:27:55.255048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.844 [2024-11-19 11:27:55.255077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.844 qpair failed and we were unable to recover it. 
00:25:59.844 [2024-11-19 11:27:55.264820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.845 [2024-11-19 11:27:55.264924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.845 [2024-11-19 11:27:55.264950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.845 [2024-11-19 11:27:55.264964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.845 [2024-11-19 11:27:55.264977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.845 [2024-11-19 11:27:55.265006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.845 qpair failed and we were unable to recover it. 
00:25:59.845 [2024-11-19 11:27:55.274905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.845 [2024-11-19 11:27:55.275001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.845 [2024-11-19 11:27:55.275026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.845 [2024-11-19 11:27:55.275040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.845 [2024-11-19 11:27:55.275058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.845 [2024-11-19 11:27:55.275088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.845 qpair failed and we were unable to recover it. 
00:25:59.845 [2024-11-19 11:27:55.284929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.845 [2024-11-19 11:27:55.285030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.845 [2024-11-19 11:27:55.285055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.845 [2024-11-19 11:27:55.285070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.845 [2024-11-19 11:27:55.285082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.845 [2024-11-19 11:27:55.285111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.845 qpair failed and we were unable to recover it. 
00:25:59.845 [2024-11-19 11:27:55.294980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.845 [2024-11-19 11:27:55.295095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.845 [2024-11-19 11:27:55.295121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.845 [2024-11-19 11:27:55.295135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.845 [2024-11-19 11:27:55.295147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.845 [2024-11-19 11:27:55.295175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.845 qpair failed and we were unable to recover it. 
00:25:59.845 [2024-11-19 11:27:55.305053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.845 [2024-11-19 11:27:55.305192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.845 [2024-11-19 11:27:55.305218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.845 [2024-11-19 11:27:55.305232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.845 [2024-11-19 11:27:55.305245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.845 [2024-11-19 11:27:55.305273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.845 qpair failed and we were unable to recover it. 
00:25:59.845 [2024-11-19 11:27:55.314994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.845 [2024-11-19 11:27:55.315094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.845 [2024-11-19 11:27:55.315118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.845 [2024-11-19 11:27:55.315133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.845 [2024-11-19 11:27:55.315144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.845 [2024-11-19 11:27:55.315173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.845 qpair failed and we were unable to recover it. 
00:25:59.845 [2024-11-19 11:27:55.325045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.845 [2024-11-19 11:27:55.325145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.845 [2024-11-19 11:27:55.325170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.845 [2024-11-19 11:27:55.325184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.845 [2024-11-19 11:27:55.325196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.845 [2024-11-19 11:27:55.325224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.845 qpair failed and we were unable to recover it. 
00:25:59.845 [2024-11-19 11:27:55.335074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.845 [2024-11-19 11:27:55.335177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.845 [2024-11-19 11:27:55.335201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.845 [2024-11-19 11:27:55.335215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.845 [2024-11-19 11:27:55.335228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:25:59.845 [2024-11-19 11:27:55.335257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:59.845 qpair failed and we were unable to recover it. 
00:26:00.104 [2024-11-19 11:27:55.345126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.104 [2024-11-19 11:27:55.345231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.104 [2024-11-19 11:27:55.345258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.104 [2024-11-19 11:27:55.345273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.104 [2024-11-19 11:27:55.345285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.104 [2024-11-19 11:27:55.345313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.104 qpair failed and we were unable to recover it. 
00:26:00.104 [2024-11-19 11:27:55.355061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.104 [2024-11-19 11:27:55.355162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.104 [2024-11-19 11:27:55.355187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.104 [2024-11-19 11:27:55.355201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.104 [2024-11-19 11:27:55.355213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.104 [2024-11-19 11:27:55.355242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.104 qpair failed and we were unable to recover it. 
00:26:00.104 [2024-11-19 11:27:55.365103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.104 [2024-11-19 11:27:55.365217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.104 [2024-11-19 11:27:55.365248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.104 [2024-11-19 11:27:55.365264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.104 [2024-11-19 11:27:55.365276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.104 [2024-11-19 11:27:55.365304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.104 qpair failed and we were unable to recover it. 
00:26:00.104 [2024-11-19 11:27:55.375165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.104 [2024-11-19 11:27:55.375265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.104 [2024-11-19 11:27:55.375291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.104 [2024-11-19 11:27:55.375305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.104 [2024-11-19 11:27:55.375317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.375345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.105 [2024-11-19 11:27:55.385210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.105 [2024-11-19 11:27:55.385320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.105 [2024-11-19 11:27:55.385344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.105 [2024-11-19 11:27:55.385357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.105 [2024-11-19 11:27:55.385379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.385408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.105 [2024-11-19 11:27:55.395209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.105 [2024-11-19 11:27:55.395310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.105 [2024-11-19 11:27:55.395336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.105 [2024-11-19 11:27:55.395350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.105 [2024-11-19 11:27:55.395374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.395406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.105 [2024-11-19 11:27:55.405226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.105 [2024-11-19 11:27:55.405329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.105 [2024-11-19 11:27:55.405353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.105 [2024-11-19 11:27:55.405376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.105 [2024-11-19 11:27:55.405396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.405425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.105 [2024-11-19 11:27:55.415272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.105 [2024-11-19 11:27:55.415386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.105 [2024-11-19 11:27:55.415411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.105 [2024-11-19 11:27:55.415426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.105 [2024-11-19 11:27:55.415438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.415467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.105 [2024-11-19 11:27:55.425307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.105 [2024-11-19 11:27:55.425467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.105 [2024-11-19 11:27:55.425492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.105 [2024-11-19 11:27:55.425506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.105 [2024-11-19 11:27:55.425518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.425547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.105 [2024-11-19 11:27:55.435289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.105 [2024-11-19 11:27:55.435401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.105 [2024-11-19 11:27:55.435426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.105 [2024-11-19 11:27:55.435441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.105 [2024-11-19 11:27:55.435453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.435481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.105 [2024-11-19 11:27:55.445376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.105 [2024-11-19 11:27:55.445461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.105 [2024-11-19 11:27:55.445485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.105 [2024-11-19 11:27:55.445499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.105 [2024-11-19 11:27:55.445511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.445539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.105 [2024-11-19 11:27:55.455424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.105 [2024-11-19 11:27:55.455552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.105 [2024-11-19 11:27:55.455578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.105 [2024-11-19 11:27:55.455593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.105 [2024-11-19 11:27:55.455605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.455633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.105 [2024-11-19 11:27:55.465442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.105 [2024-11-19 11:27:55.465536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.105 [2024-11-19 11:27:55.465561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.105 [2024-11-19 11:27:55.465575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.105 [2024-11-19 11:27:55.465587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.465616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.105 [2024-11-19 11:27:55.475405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.105 [2024-11-19 11:27:55.475490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.105 [2024-11-19 11:27:55.475513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.105 [2024-11-19 11:27:55.475527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.105 [2024-11-19 11:27:55.475539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.475568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.105 [2024-11-19 11:27:55.485460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.105 [2024-11-19 11:27:55.485547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.105 [2024-11-19 11:27:55.485570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.105 [2024-11-19 11:27:55.485584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.105 [2024-11-19 11:27:55.485596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.485625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.105 [2024-11-19 11:27:55.495503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.105 [2024-11-19 11:27:55.495594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.105 [2024-11-19 11:27:55.495627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.105 [2024-11-19 11:27:55.495642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.105 [2024-11-19 11:27:55.495654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.105 [2024-11-19 11:27:55.495683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.105 qpair failed and we were unable to recover it. 
00:26:00.106 [2024-11-19 11:27:55.505459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.106 [2024-11-19 11:27:55.505557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.106 [2024-11-19 11:27:55.505581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.106 [2024-11-19 11:27:55.505595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.106 [2024-11-19 11:27:55.505607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.106 [2024-11-19 11:27:55.505636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.106 qpair failed and we were unable to recover it. 
00:26:00.106 [2024-11-19 11:27:55.515532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.106 [2024-11-19 11:27:55.515618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.106 [2024-11-19 11:27:55.515642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.106 [2024-11-19 11:27:55.515656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.106 [2024-11-19 11:27:55.515668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.106 [2024-11-19 11:27:55.515707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.106 qpair failed and we were unable to recover it. 
00:26:00.106 [2024-11-19 11:27:55.525592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.106 [2024-11-19 11:27:55.525677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.106 [2024-11-19 11:27:55.525701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.106 [2024-11-19 11:27:55.525714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.106 [2024-11-19 11:27:55.525726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.106 [2024-11-19 11:27:55.525755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.106 qpair failed and we were unable to recover it. 
00:26:00.106 [2024-11-19 11:27:55.535697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.106 [2024-11-19 11:27:55.535805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.106 [2024-11-19 11:27:55.535830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.106 [2024-11-19 11:27:55.535844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.106 [2024-11-19 11:27:55.535862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.106 [2024-11-19 11:27:55.535892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.106 qpair failed and we were unable to recover it. 
00:26:00.106 [2024-11-19 11:27:55.545620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.106 [2024-11-19 11:27:55.545707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.106 [2024-11-19 11:27:55.545732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.106 [2024-11-19 11:27:55.545746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.106 [2024-11-19 11:27:55.545758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.106 [2024-11-19 11:27:55.545786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.106 qpair failed and we were unable to recover it. 
00:26:00.106 [2024-11-19 11:27:55.555673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.106 [2024-11-19 11:27:55.555799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.106 [2024-11-19 11:27:55.555823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.106 [2024-11-19 11:27:55.555837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.106 [2024-11-19 11:27:55.555849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.106 [2024-11-19 11:27:55.555877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.106 qpair failed and we were unable to recover it. 
00:26:00.106 [2024-11-19 11:27:55.565681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.106 [2024-11-19 11:27:55.565779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.106 [2024-11-19 11:27:55.565803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.106 [2024-11-19 11:27:55.565817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.106 [2024-11-19 11:27:55.565828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.106 [2024-11-19 11:27:55.565856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.106 qpair failed and we were unable to recover it. 
00:26:00.106 [2024-11-19 11:27:55.575717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.106 [2024-11-19 11:27:55.575848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.106 [2024-11-19 11:27:55.575874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.106 [2024-11-19 11:27:55.575888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.106 [2024-11-19 11:27:55.575901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.106 [2024-11-19 11:27:55.575929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.106 qpair failed and we were unable to recover it. 
00:26:00.106 [2024-11-19 11:27:55.585719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.106 [2024-11-19 11:27:55.585827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.106 [2024-11-19 11:27:55.585851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.106 [2024-11-19 11:27:55.585865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.106 [2024-11-19 11:27:55.585877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.106 [2024-11-19 11:27:55.585906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.106 qpair failed and we were unable to recover it. 
00:26:00.106 [2024-11-19 11:27:55.595812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.106 [2024-11-19 11:27:55.595912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.106 [2024-11-19 11:27:55.595936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.106 [2024-11-19 11:27:55.595950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.106 [2024-11-19 11:27:55.595962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.106 [2024-11-19 11:27:55.595990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.106 qpair failed and we were unable to recover it. 
00:26:00.366 [2024-11-19 11:27:55.605760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.366 [2024-11-19 11:27:55.605843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.366 [2024-11-19 11:27:55.605867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.366 [2024-11-19 11:27:55.605881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.366 [2024-11-19 11:27:55.605893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.366 [2024-11-19 11:27:55.605921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.366 qpair failed and we were unable to recover it. 
00:26:00.366 [2024-11-19 11:27:55.615847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.366 [2024-11-19 11:27:55.615959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.366 [2024-11-19 11:27:55.615985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.366 [2024-11-19 11:27:55.615999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.366 [2024-11-19 11:27:55.616011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.366 [2024-11-19 11:27:55.616039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.366 qpair failed and we were unable to recover it. 
00:26:00.366 [2024-11-19 11:27:55.625818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.366 [2024-11-19 11:27:55.625920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.366 [2024-11-19 11:27:55.625950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.366 [2024-11-19 11:27:55.625965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.366 [2024-11-19 11:27:55.625977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.366 [2024-11-19 11:27:55.626006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.366 qpair failed and we were unable to recover it. 
00:26:00.366 [2024-11-19 11:27:55.635952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.366 [2024-11-19 11:27:55.636046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.366 [2024-11-19 11:27:55.636070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.366 [2024-11-19 11:27:55.636085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.366 [2024-11-19 11:27:55.636097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.366 [2024-11-19 11:27:55.636136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.366 qpair failed and we were unable to recover it. 
00:26:00.366 [2024-11-19 11:27:55.645895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.366 [2024-11-19 11:27:55.646045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.366 [2024-11-19 11:27:55.646071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.366 [2024-11-19 11:27:55.646086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.366 [2024-11-19 11:27:55.646098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.366 [2024-11-19 11:27:55.646127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.366 qpair failed and we were unable to recover it. 
00:26:00.366 [2024-11-19 11:27:55.656004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.366 [2024-11-19 11:27:55.656114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.366 [2024-11-19 11:27:55.656139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.366 [2024-11-19 11:27:55.656154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.366 [2024-11-19 11:27:55.656166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.366 [2024-11-19 11:27:55.656199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.366 qpair failed and we were unable to recover it. 
00:26:00.366 [2024-11-19 11:27:55.665989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.366 [2024-11-19 11:27:55.666093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.366 [2024-11-19 11:27:55.666118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.366 [2024-11-19 11:27:55.666133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.366 [2024-11-19 11:27:55.666153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.366 [2024-11-19 11:27:55.666183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.366 qpair failed and we were unable to recover it. 
00:26:00.366 [2024-11-19 11:27:55.676005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.366 [2024-11-19 11:27:55.676107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.366 [2024-11-19 11:27:55.676132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.366 [2024-11-19 11:27:55.676146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.366 [2024-11-19 11:27:55.676158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.366 [2024-11-19 11:27:55.676187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.366 qpair failed and we were unable to recover it. 
00:26:00.366 [2024-11-19 11:27:55.686036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.366 [2024-11-19 11:27:55.686168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.366 [2024-11-19 11:27:55.686193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.366 [2024-11-19 11:27:55.686208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.366 [2024-11-19 11:27:55.686220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.366 [2024-11-19 11:27:55.686248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.366 qpair failed and we were unable to recover it. 
00:26:00.366 [2024-11-19 11:27:55.696131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.366 [2024-11-19 11:27:55.696264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.366 [2024-11-19 11:27:55.696290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.366 [2024-11-19 11:27:55.696304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.696316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.696345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.706039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.367 [2024-11-19 11:27:55.706145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.367 [2024-11-19 11:27:55.706171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.367 [2024-11-19 11:27:55.706186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.706198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.706227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.716166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.367 [2024-11-19 11:27:55.716270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.367 [2024-11-19 11:27:55.716294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.367 [2024-11-19 11:27:55.716308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.716321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.716349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.726188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.367 [2024-11-19 11:27:55.726295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.367 [2024-11-19 11:27:55.726320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.367 [2024-11-19 11:27:55.726335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.726347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.726394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.736181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.367 [2024-11-19 11:27:55.736289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.367 [2024-11-19 11:27:55.736313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.367 [2024-11-19 11:27:55.736327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.736339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.736375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.746263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.367 [2024-11-19 11:27:55.746401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.367 [2024-11-19 11:27:55.746426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.367 [2024-11-19 11:27:55.746440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.746452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.746480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.756242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.367 [2024-11-19 11:27:55.756342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.367 [2024-11-19 11:27:55.756381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.367 [2024-11-19 11:27:55.756397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.756409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.756438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.766281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.367 [2024-11-19 11:27:55.766392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.367 [2024-11-19 11:27:55.766416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.367 [2024-11-19 11:27:55.766429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.766441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.766469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.776303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.367 [2024-11-19 11:27:55.776417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.367 [2024-11-19 11:27:55.776442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.367 [2024-11-19 11:27:55.776456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.776468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.776496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.786308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.367 [2024-11-19 11:27:55.786422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.367 [2024-11-19 11:27:55.786448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.367 [2024-11-19 11:27:55.786462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.786474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.786503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.796408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.367 [2024-11-19 11:27:55.796491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.367 [2024-11-19 11:27:55.796514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.367 [2024-11-19 11:27:55.796528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.796545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.796576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.806413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.367 [2024-11-19 11:27:55.806529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.367 [2024-11-19 11:27:55.806554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.367 [2024-11-19 11:27:55.806568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.806581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.806609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.816445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.367 [2024-11-19 11:27:55.816552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.367 [2024-11-19 11:27:55.816578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.367 [2024-11-19 11:27:55.816592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.367 [2024-11-19 11:27:55.816605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.367 [2024-11-19 11:27:55.816633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.367 qpair failed and we were unable to recover it. 
00:26:00.367 [2024-11-19 11:27:55.826448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.368 [2024-11-19 11:27:55.826531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.368 [2024-11-19 11:27:55.826555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.368 [2024-11-19 11:27:55.826568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.368 [2024-11-19 11:27:55.826580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.368 [2024-11-19 11:27:55.826609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.368 qpair failed and we were unable to recover it. 
00:26:00.368 [2024-11-19 11:27:55.836443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.368 [2024-11-19 11:27:55.836531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.368 [2024-11-19 11:27:55.836554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.368 [2024-11-19 11:27:55.836568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.368 [2024-11-19 11:27:55.836580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.368 [2024-11-19 11:27:55.836609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.368 qpair failed and we were unable to recover it. 
00:26:00.368 [2024-11-19 11:27:55.846523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.368 [2024-11-19 11:27:55.846647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.368 [2024-11-19 11:27:55.846672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.368 [2024-11-19 11:27:55.846687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.368 [2024-11-19 11:27:55.846699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.368 [2024-11-19 11:27:55.846727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.368 qpair failed and we were unable to recover it. 
00:26:00.368 [2024-11-19 11:27:55.856525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.368 [2024-11-19 11:27:55.856622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.368 [2024-11-19 11:27:55.856645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.368 [2024-11-19 11:27:55.856659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.368 [2024-11-19 11:27:55.856671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.368 [2024-11-19 11:27:55.856699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.368 qpair failed and we were unable to recover it. 
00:26:00.627 [2024-11-19 11:27:55.866566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.627 [2024-11-19 11:27:55.866692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.627 [2024-11-19 11:27:55.866717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.627 [2024-11-19 11:27:55.866732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.627 [2024-11-19 11:27:55.866744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.627 [2024-11-19 11:27:55.866772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.627 qpair failed and we were unable to recover it. 
00:26:00.627 [2024-11-19 11:27:55.876571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.627 [2024-11-19 11:27:55.876661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.876685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.876699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.628 [2024-11-19 11:27:55.876712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.628 [2024-11-19 11:27:55.876740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.628 qpair failed and we were unable to recover it. 
00:26:00.628 [2024-11-19 11:27:55.886590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.628 [2024-11-19 11:27:55.886674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.886703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.886718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.628 [2024-11-19 11:27:55.886730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.628 [2024-11-19 11:27:55.886758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.628 qpair failed and we were unable to recover it. 
00:26:00.628 [2024-11-19 11:27:55.896670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.628 [2024-11-19 11:27:55.896780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.896804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.896818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.628 [2024-11-19 11:27:55.896830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.628 [2024-11-19 11:27:55.896859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.628 qpair failed and we were unable to recover it. 
00:26:00.628 [2024-11-19 11:27:55.906683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.628 [2024-11-19 11:27:55.906783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.906806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.906821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.628 [2024-11-19 11:27:55.906833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.628 [2024-11-19 11:27:55.906861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.628 qpair failed and we were unable to recover it. 
00:26:00.628 [2024-11-19 11:27:55.916751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.628 [2024-11-19 11:27:55.916870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.916895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.916910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.628 [2024-11-19 11:27:55.916921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.628 [2024-11-19 11:27:55.916950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.628 qpair failed and we were unable to recover it. 
00:26:00.628 [2024-11-19 11:27:55.926737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.628 [2024-11-19 11:27:55.926840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.926865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.926880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.628 [2024-11-19 11:27:55.926897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.628 [2024-11-19 11:27:55.926926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.628 qpair failed and we were unable to recover it. 
00:26:00.628 [2024-11-19 11:27:55.936827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.628 [2024-11-19 11:27:55.936940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.936965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.936980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.628 [2024-11-19 11:27:55.936992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.628 [2024-11-19 11:27:55.937020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.628 qpair failed and we were unable to recover it. 
00:26:00.628 [2024-11-19 11:27:55.946770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.628 [2024-11-19 11:27:55.946868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.946891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.946905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.628 [2024-11-19 11:27:55.946918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.628 [2024-11-19 11:27:55.946946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.628 qpair failed and we were unable to recover it. 
00:26:00.628 [2024-11-19 11:27:55.956790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.628 [2024-11-19 11:27:55.956934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.956959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.956973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.628 [2024-11-19 11:27:55.956985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.628 [2024-11-19 11:27:55.957014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.628 qpair failed and we were unable to recover it. 
00:26:00.628 [2024-11-19 11:27:55.966835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.628 [2024-11-19 11:27:55.966937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.966962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.966976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.628 [2024-11-19 11:27:55.966988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.628 [2024-11-19 11:27:55.967017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.628 qpair failed and we were unable to recover it. 
00:26:00.628 [2024-11-19 11:27:55.976861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.628 [2024-11-19 11:27:55.976971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.976996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.977011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.628 [2024-11-19 11:27:55.977023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.628 [2024-11-19 11:27:55.977052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.628 qpair failed and we were unable to recover it. 
00:26:00.628 [2024-11-19 11:27:55.986869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.628 [2024-11-19 11:27:55.986984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.987009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.987023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.628 [2024-11-19 11:27:55.987036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.628 [2024-11-19 11:27:55.987064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.628 qpair failed and we were unable to recover it. 
00:26:00.628 [2024-11-19 11:27:55.996903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.628 [2024-11-19 11:27:55.997001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.628 [2024-11-19 11:27:55.997025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.628 [2024-11-19 11:27:55.997039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:55.997051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:55.997080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.629 [2024-11-19 11:27:56.006943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.629 [2024-11-19 11:27:56.007090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.629 [2024-11-19 11:27:56.007115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.629 [2024-11-19 11:27:56.007129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:56.007142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:56.007170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.629 [2024-11-19 11:27:56.016985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.629 [2024-11-19 11:27:56.017088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.629 [2024-11-19 11:27:56.017118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.629 [2024-11-19 11:27:56.017133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:56.017145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:56.017184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.629 [2024-11-19 11:27:56.027041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.629 [2024-11-19 11:27:56.027179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.629 [2024-11-19 11:27:56.027204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.629 [2024-11-19 11:27:56.027219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:56.027231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:56.027260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.629 [2024-11-19 11:27:56.037022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.629 [2024-11-19 11:27:56.037128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.629 [2024-11-19 11:27:56.037153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.629 [2024-11-19 11:27:56.037167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:56.037179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:56.037208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.629 [2024-11-19 11:27:56.047054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.629 [2024-11-19 11:27:56.047186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.629 [2024-11-19 11:27:56.047211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.629 [2024-11-19 11:27:56.047226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:56.047238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:56.047266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.629 [2024-11-19 11:27:56.057114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.629 [2024-11-19 11:27:56.057223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.629 [2024-11-19 11:27:56.057249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.629 [2024-11-19 11:27:56.057268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:56.057281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:56.057310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.629 [2024-11-19 11:27:56.067116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.629 [2024-11-19 11:27:56.067219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.629 [2024-11-19 11:27:56.067244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.629 [2024-11-19 11:27:56.067258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:56.067270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:56.067299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.629 [2024-11-19 11:27:56.077140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.629 [2024-11-19 11:27:56.077241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.629 [2024-11-19 11:27:56.077266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.629 [2024-11-19 11:27:56.077281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:56.077293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:56.077321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.629 [2024-11-19 11:27:56.087164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.629 [2024-11-19 11:27:56.087309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.629 [2024-11-19 11:27:56.087335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.629 [2024-11-19 11:27:56.087350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:56.087373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:56.087404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.629 [2024-11-19 11:27:56.097208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.629 [2024-11-19 11:27:56.097317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.629 [2024-11-19 11:27:56.097341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.629 [2024-11-19 11:27:56.097355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:56.097376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:56.097406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.629 [2024-11-19 11:27:56.107245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.629 [2024-11-19 11:27:56.107383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.629 [2024-11-19 11:27:56.107409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.629 [2024-11-19 11:27:56.107423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:56.107435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:56.107464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.629 [2024-11-19 11:27:56.117259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.629 [2024-11-19 11:27:56.117369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.629 [2024-11-19 11:27:56.117395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.629 [2024-11-19 11:27:56.117410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.629 [2024-11-19 11:27:56.117422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.629 [2024-11-19 11:27:56.117451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.629 qpair failed and we were unable to recover it. 
00:26:00.889 [2024-11-19 11:27:56.127288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.889 [2024-11-19 11:27:56.127423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.889 [2024-11-19 11:27:56.127449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.889 [2024-11-19 11:27:56.127464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.889 [2024-11-19 11:27:56.127476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.889 [2024-11-19 11:27:56.127504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.889 qpair failed and we were unable to recover it. 
00:26:00.889 [2024-11-19 11:27:56.137313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.889 [2024-11-19 11:27:56.137433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.889 [2024-11-19 11:27:56.137458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.889 [2024-11-19 11:27:56.137473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.889 [2024-11-19 11:27:56.137485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.889 [2024-11-19 11:27:56.137514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.889 qpair failed and we were unable to recover it. 
00:26:00.889 [2024-11-19 11:27:56.147377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.889 [2024-11-19 11:27:56.147471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.889 [2024-11-19 11:27:56.147502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.889 [2024-11-19 11:27:56.147517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.889 [2024-11-19 11:27:56.147529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.889 [2024-11-19 11:27:56.147557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.889 qpair failed and we were unable to recover it. 
00:26:00.889 [2024-11-19 11:27:56.157415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.889 [2024-11-19 11:27:56.157521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.889 [2024-11-19 11:27:56.157547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.889 [2024-11-19 11:27:56.157562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.889 [2024-11-19 11:27:56.157574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.889 [2024-11-19 11:27:56.157602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.889 qpair failed and we were unable to recover it. 
00:26:00.889 [2024-11-19 11:27:56.167428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.889 [2024-11-19 11:27:56.167545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.889 [2024-11-19 11:27:56.167570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.889 [2024-11-19 11:27:56.167584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.889 [2024-11-19 11:27:56.167596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.889 [2024-11-19 11:27:56.167625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.889 qpair failed and we were unable to recover it. 
00:26:00.889 [2024-11-19 11:27:56.177478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.889 [2024-11-19 11:27:56.177568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.889 [2024-11-19 11:27:56.177592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.889 [2024-11-19 11:27:56.177606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.889 [2024-11-19 11:27:56.177618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.889 [2024-11-19 11:27:56.177646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.889 qpair failed and we were unable to recover it. 
00:26:00.889 [2024-11-19 11:27:56.187457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.889 [2024-11-19 11:27:56.187556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.889 [2024-11-19 11:27:56.187580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.890 [2024-11-19 11:27:56.187600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.890 [2024-11-19 11:27:56.187613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.890 [2024-11-19 11:27:56.187642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.890 qpair failed and we were unable to recover it. 
00:26:00.890 [2024-11-19 11:27:56.197500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.890 [2024-11-19 11:27:56.197620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.890 [2024-11-19 11:27:56.197646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.890 [2024-11-19 11:27:56.197660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.890 [2024-11-19 11:27:56.197672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.890 [2024-11-19 11:27:56.197701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.890 qpair failed and we were unable to recover it. 
00:26:00.890 [2024-11-19 11:27:56.207526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.890 [2024-11-19 11:27:56.207617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.890 [2024-11-19 11:27:56.207641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.890 [2024-11-19 11:27:56.207655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.890 [2024-11-19 11:27:56.207667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.890 [2024-11-19 11:27:56.207695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.890 qpair failed and we were unable to recover it. 
00:26:00.890 [2024-11-19 11:27:56.217582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.890 [2024-11-19 11:27:56.217677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.890 [2024-11-19 11:27:56.217704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.890 [2024-11-19 11:27:56.217719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.890 [2024-11-19 11:27:56.217731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.890 [2024-11-19 11:27:56.217760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.890 qpair failed and we were unable to recover it. 
00:26:00.890 [2024-11-19 11:27:56.227591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.890 [2024-11-19 11:27:56.227685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.890 [2024-11-19 11:27:56.227712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.890 [2024-11-19 11:27:56.227726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.890 [2024-11-19 11:27:56.227739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.890 [2024-11-19 11:27:56.227768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.890 qpair failed and we were unable to recover it. 
00:26:00.890 [2024-11-19 11:27:56.237618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.890 [2024-11-19 11:27:56.237713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.890 [2024-11-19 11:27:56.237739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.890 [2024-11-19 11:27:56.237754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.890 [2024-11-19 11:27:56.237765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.890 [2024-11-19 11:27:56.237794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.890 qpair failed and we were unable to recover it. 
00:26:00.890 [2024-11-19 11:27:56.247652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.890 [2024-11-19 11:27:56.247798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.890 [2024-11-19 11:27:56.247821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.890 [2024-11-19 11:27:56.247835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.890 [2024-11-19 11:27:56.247847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.890 [2024-11-19 11:27:56.247875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.890 qpair failed and we were unable to recover it. 
00:26:00.890 [2024-11-19 11:27:56.257709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.890 [2024-11-19 11:27:56.257818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.890 [2024-11-19 11:27:56.257842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.890 [2024-11-19 11:27:56.257856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.890 [2024-11-19 11:27:56.257869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.890 [2024-11-19 11:27:56.257897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.890 qpair failed and we were unable to recover it. 
00:26:00.890 [2024-11-19 11:27:56.267717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.890 [2024-11-19 11:27:56.267821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.890 [2024-11-19 11:27:56.267846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.890 [2024-11-19 11:27:56.267860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.890 [2024-11-19 11:27:56.267872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.890 [2024-11-19 11:27:56.267901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.890 qpair failed and we were unable to recover it. 
00:26:00.890 [2024-11-19 11:27:56.277748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.890 [2024-11-19 11:27:56.277866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.890 [2024-11-19 11:27:56.277897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.890 [2024-11-19 11:27:56.277912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.890 [2024-11-19 11:27:56.277925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.890 [2024-11-19 11:27:56.277955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.890 qpair failed and we were unable to recover it. 
00:26:00.890 [2024-11-19 11:27:56.287746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.890 [2024-11-19 11:27:56.287887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.890 [2024-11-19 11:27:56.287912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.890 [2024-11-19 11:27:56.287926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.890 [2024-11-19 11:27:56.287939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:00.890 [2024-11-19 11:27:56.287967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:00.890 qpair failed and we were unable to recover it. 
00:26:01.151 [2024-11-19 11:27:56.528427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.151 [2024-11-19 11:27:56.528515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.151 [2024-11-19 11:27:56.528539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.151 [2024-11-19 11:27:56.528553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.151 [2024-11-19 11:27:56.528565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1045fa0 00:26:01.151 [2024-11-19 11:27:56.528594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.151 qpair failed and we were unable to recover it. 00:26:01.151 [2024-11-19 11:27:56.528725] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:26:01.151 A controller has encountered a failure and is being reset. 00:26:01.151 Controller properly reset. 00:26:01.151 Initializing NVMe Controllers 00:26:01.151 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:01.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:01.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:01.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:01.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:01.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:01.151 Initialization complete. Launching workers. 
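The failure block above follows SPDK's fixed error-line format (`<file>.c: <line>:<function>: *ERROR*: <message>`). When triaging a long run like this one, tallying which functions report errors quickly shows whether a failure is one storm or many distinct faults. A minimal sketch (the sample lines are abbreviated copies of entries from the log above; the regex is an assumption based on the lines shown here, not a documented SPDK log grammar):

```python
import re
from collections import Counter

# Abbreviated copies of error lines from the log above.
log_lines = [
    "[2024-11-19 11:27:56.287746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1",
    "[2024-11-19 11:27:56.287887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5",
    "[2024-11-19 11:27:56.297809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1",
]

# Matches "<file>.c: <line>:<function>: *ERROR*: <message>" as seen above.
pattern = re.compile(r"(\w+\.c):\s*(\d+):(\w+): \*ERROR\*: (.+)")

# Count error occurrences per reporting function (group 3 of the match).
counts = Counter(
    m.group(3) for line in log_lines if (m := pattern.search(line))
)
print(counts.most_common())
# [('_nvmf_ctrlr_add_io_qpair', 2), ('nvme_fabric_qpair_connect_poll', 1)]
```

Run against the full console output, a tally like this makes the repeated `_nvmf_ctrlr_add_io_qpair` / `nvme_fabric_qpair_connect_poll` pairs stand out as a single retry loop rather than independent failures.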
00:26:01.151 Starting thread on core 1 00:26:01.151 Starting thread on core 2 00:26:01.151 Starting thread on core 3 00:26:01.151 Starting thread on core 0 00:26:01.151 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:01.151 00:26:01.151 real 0m10.712s 00:26:01.151 user 0m18.977s 00:26:01.151 sys 0m5.345s 00:26:01.151 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.151 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:01.151 ************************************ 00:26:01.151 END TEST nvmf_target_disconnect_tc2 00:26:01.151 ************************************ 00:26:01.151 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:01.151 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:01.151 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:01.151 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:01.151 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:01.151 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:01.151 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:01.151 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:01.151 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:01.151 rmmod nvme_tcp 00:26:01.151 rmmod nvme_fabrics 00:26:01.409 rmmod nvme_keyring 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2728081 ']' 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2728081 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2728081 ']' 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2728081 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728081 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728081' 00:26:01.409 killing process with pid 2728081 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2728081 00:26:01.409 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2728081 00:26:01.668 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:01.668 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:01.668 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:01.668 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:01.668 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:26:01.668 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:01.668 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:26:01.668 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:01.668 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:01.668 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.668 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.668 11:27:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.573 11:27:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:03.573 00:26:03.573 real 0m16.220s 00:26:03.573 user 0m45.171s 00:26:03.573 sys 0m7.802s 00:26:03.573 11:27:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:03.573 11:27:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:03.573 ************************************ 00:26:03.573 END TEST nvmf_target_disconnect 00:26:03.573 ************************************ 00:26:03.573 11:27:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:03.573 00:26:03.573 real 5m22.407s 00:26:03.573 user 11m8.370s 00:26:03.573 sys 1m23.167s 00:26:03.573 11:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:03.573 11:27:59 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.573 ************************************ 00:26:03.573 END TEST nvmf_host 00:26:03.573 ************************************ 00:26:03.573 11:27:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:03.573 11:27:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:03.573 11:27:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:03.573 11:27:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:03.573 11:27:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:03.573 11:27:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:03.573 ************************************ 00:26:03.573 START TEST nvmf_target_core_interrupt_mode 00:26:03.573 ************************************ 00:26:03.573 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:03.835 * Looking for test storage... 
00:26:03.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:03.835 11:27:59 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.835 --rc 
genhtml_branch_coverage=1 00:26:03.835 --rc genhtml_function_coverage=1 00:26:03.835 --rc genhtml_legend=1 00:26:03.835 --rc geninfo_all_blocks=1 00:26:03.835 --rc geninfo_unexecuted_blocks=1 00:26:03.835 00:26:03.835 ' 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.835 --rc genhtml_branch_coverage=1 00:26:03.835 --rc genhtml_function_coverage=1 00:26:03.835 --rc genhtml_legend=1 00:26:03.835 --rc geninfo_all_blocks=1 00:26:03.835 --rc geninfo_unexecuted_blocks=1 00:26:03.835 00:26:03.835 ' 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.835 --rc genhtml_branch_coverage=1 00:26:03.835 --rc genhtml_function_coverage=1 00:26:03.835 --rc genhtml_legend=1 00:26:03.835 --rc geninfo_all_blocks=1 00:26:03.835 --rc geninfo_unexecuted_blocks=1 00:26:03.835 00:26:03.835 ' 00:26:03.835 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:03.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.835 --rc genhtml_branch_coverage=1 00:26:03.836 --rc genhtml_function_coverage=1 00:26:03.836 --rc genhtml_legend=1 00:26:03.836 --rc geninfo_all_blocks=1 00:26:03.836 --rc geninfo_unexecuted_blocks=1 00:26:03.836 00:26:03.836 ' 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.836 
11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.836 11:27:59 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:03.836 
11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:03.836 ************************************ 00:26:03.836 START TEST nvmf_abort 00:26:03.836 ************************************ 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:03.836 * Looking for test storage... 
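The `lt 1.15 2` / `cmp_versions` trace that recurs throughout this log (here deciding whether the installed lcov is older than 2 so the right `--rc` coverage options get exported) splits both dotted versions on `IFS=.-:` into arrays and compares them component-wise. A minimal sketch of that logic, as a hypothetical `ver_lt` helper rather than the actual `scripts/common.sh` implementation:

```shell
#!/usr/bin/env bash
# ver_lt A B: return 0 (true) when version A sorts strictly before version B.
# Hypothetical condensation of the lt/cmp_versions helpers traced above.
ver_lt() {
  local IFS=.-:                 # split components on '.', '-' or ':' as the trace does
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i max=${#v1[@]}
  (( ${#v2[@]} > max )) && max=${#v2[@]}
  for (( i = 0; i < max; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                      # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

The real helper also handles a `>` operator and pre-release suffixes; this sketch only covers the `<` path exercised in the trace above.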
00:26:03.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:26:03.836 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:04.096 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:04.096 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.096 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.096 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:04.097 11:27:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.097 --rc genhtml_branch_coverage=1 00:26:04.097 --rc genhtml_function_coverage=1 00:26:04.097 --rc genhtml_legend=1 00:26:04.097 --rc geninfo_all_blocks=1 00:26:04.097 --rc geninfo_unexecuted_blocks=1 00:26:04.097 00:26:04.097 ' 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.097 --rc genhtml_branch_coverage=1 00:26:04.097 --rc genhtml_function_coverage=1 00:26:04.097 --rc genhtml_legend=1 00:26:04.097 --rc geninfo_all_blocks=1 00:26:04.097 --rc geninfo_unexecuted_blocks=1 00:26:04.097 00:26:04.097 ' 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.097 --rc genhtml_branch_coverage=1 00:26:04.097 --rc genhtml_function_coverage=1 00:26:04.097 --rc genhtml_legend=1 00:26:04.097 --rc geninfo_all_blocks=1 00:26:04.097 --rc geninfo_unexecuted_blocks=1 00:26:04.097 00:26:04.097 ' 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:04.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.097 --rc genhtml_branch_coverage=1 00:26:04.097 --rc genhtml_function_coverage=1 00:26:04.097 --rc genhtml_legend=1 00:26:04.097 --rc geninfo_all_blocks=1 00:26:04.097 --rc geninfo_unexecuted_blocks=1 00:26:04.097 00:26:04.097 ' 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.097 11:27:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.097 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.098 11:27:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.098 11:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:06.631 11:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
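The `build_nvmf_app_args` trace above (nvmf/common.sh@25–34) shows the target's command line being assembled as a bash array, with each flag appended only when its test-config knob is set; because this is an `--interrupt-mode` run, the `'[' 1 -eq 1 ']'` branch fires and `--interrupt-mode` lands in `NVMF_APP`. A simplified sketch of that pattern, with a hypothetical binary name and only the two knobs visible in the trace:

```shell
#!/usr/bin/env bash
# Hypothetical condensation of build_nvmf_app_args: flags are appended to an
# array conditionally, so the final invocation is just "${NVMF_APP[@]}".
build_app_args() {
  local shm_id=$1 interrupt_mode=$2
  NVMF_APP=(./nvmf_tgt)                  # base command (name assumed for illustration)
  NVMF_APP+=(-i "$shm_id" -e 0xFFFF)     # shared-memory id + trace-flag mask, as traced
  if [ "$interrupt_mode" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)         # only added for interrupt-mode test runs
  fi
  printf '%s\n' "${NVMF_APP[*]}"
}

build_app_args 0 1
```

Keeping the arguments in an array (rather than a flat string) is what lets the later `"${NVMF_APP[@]}"` expansion preserve word boundaries even if a value contains spaces.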
00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.631 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:06.632 11:28:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:26:06.632 Found 0000:82:00.0 (0x8086 - 0x159b) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:26:06.632 Found 0000:82:00.1 (0x8086 - 0x159b) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:06.632 
11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:26:06.632 Found net devices under 0000:82:00.0: cvl_0_0 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:26:06.632 Found net devices under 0000:82:00.1: cvl_0_1 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.632 11:28:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:06.632 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:06.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:26:06.891 00:26:06.891 --- 10.0.0.2 ping statistics --- 00:26:06.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.891 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:06.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:26:06.891 00:26:06.891 --- 10.0.0.1 ping statistics --- 00:26:06.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.891 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2731185 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2731185 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2731185 ']' 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.891 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:06.891 [2024-11-19 11:28:02.234288] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:06.891 [2024-11-19 11:28:02.235449] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:26:06.891 [2024-11-19 11:28:02.235520] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.891 [2024-11-19 11:28:02.319052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:06.891 [2024-11-19 11:28:02.377738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.891 [2024-11-19 11:28:02.377797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.891 [2024-11-19 11:28:02.377825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.891 [2024-11-19 11:28:02.377837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.891 [2024-11-19 11:28:02.377847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.891 [2024-11-19 11:28:02.379428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.891 [2024-11-19 11:28:02.379520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:06.891 [2024-11-19 11:28:02.379524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.150 [2024-11-19 11:28:02.475111] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:07.150 [2024-11-19 11:28:02.475296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:07.150 [2024-11-19 11:28:02.475313] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:26:07.150 [2024-11-19 11:28:02.475577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:07.150 [2024-11-19 11:28:02.524252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:26:07.150 Malloc0 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:07.150 Delay0 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:07.150 [2024-11-19 11:28:02.596396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.150 11:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:07.408 [2024-11-19 11:28:02.742531] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:09.308 Initializing NVMe Controllers 00:26:09.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:09.308 controller IO queue size 128 less than required 00:26:09.308 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:09.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:09.308 Initialization complete. Launching workers. 
00:26:09.308 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28409 00:26:09.308 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28466, failed to submit 66 00:26:09.308 success 28409, unsuccessful 57, failed 0 00:26:09.308 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:09.308 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.308 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.308 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.308 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:09.308 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:09.308 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:09.308 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:09.308 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:09.308 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:09.308 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:09.308 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:09.308 rmmod nvme_tcp 00:26:09.566 rmmod nvme_fabrics 00:26:09.566 rmmod nvme_keyring 00:26:09.566 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:09.566 11:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:09.566 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:09.566 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2731185 ']' 00:26:09.566 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2731185 00:26:09.567 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2731185 ']' 00:26:09.567 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2731185 00:26:09.567 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:26:09.567 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.567 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2731185 00:26:09.567 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:09.567 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:09.567 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2731185' 00:26:09.567 killing process with pid 2731185 00:26:09.567 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2731185 00:26:09.567 11:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2731185 00:26:09.860 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:09.860 11:28:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:09.860 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:09.860 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:09.860 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:09.860 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:09.860 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:09.860 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:09.860 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:09.860 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.860 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.860 11:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.766 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:11.766 00:26:11.766 real 0m7.921s 00:26:11.766 user 0m9.479s 00:26:11.766 sys 0m3.340s 00:26:11.766 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:11.766 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:11.766 ************************************ 00:26:11.766 END TEST nvmf_abort 00:26:11.766 ************************************ 00:26:11.766 11:28:07 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:11.766 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:11.766 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.766 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:11.766 ************************************ 00:26:11.766 START TEST nvmf_ns_hotplug_stress 00:26:11.766 ************************************ 00:26:11.766 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:11.766 * Looking for test storage... 
00:26:11.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:11.766 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:11.766 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:26:11.766 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.026 11:28:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.026 11:28:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:12.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.026 --rc genhtml_branch_coverage=1 00:26:12.026 --rc genhtml_function_coverage=1 00:26:12.026 --rc genhtml_legend=1 00:26:12.026 --rc geninfo_all_blocks=1 00:26:12.026 --rc geninfo_unexecuted_blocks=1 00:26:12.026 00:26:12.026 ' 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:12.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.026 --rc genhtml_branch_coverage=1 00:26:12.026 --rc genhtml_function_coverage=1 00:26:12.026 --rc genhtml_legend=1 00:26:12.026 --rc geninfo_all_blocks=1 00:26:12.026 --rc geninfo_unexecuted_blocks=1 00:26:12.026 00:26:12.026 ' 00:26:12.026 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:12.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.026 --rc genhtml_branch_coverage=1 00:26:12.026 --rc genhtml_function_coverage=1 00:26:12.026 --rc genhtml_legend=1 00:26:12.026 --rc geninfo_all_blocks=1 00:26:12.026 --rc geninfo_unexecuted_blocks=1 00:26:12.026 00:26:12.026 ' 00:26:12.026 11:28:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:12.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.026 --rc genhtml_branch_coverage=1 00:26:12.026 --rc genhtml_function_coverage=1 00:26:12.026 --rc genhtml_legend=1 00:26:12.026 --rc geninfo_all_blocks=1 00:26:12.026 --rc geninfo_unexecuted_blocks=1 00:26:12.026 00:26:12.026 ' 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.027 11:28:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.027 
11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:12.027 11:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:14.573 
11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.573 11:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:14.573 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:26:14.574 Found 0000:82:00.0 (0x8086 - 0x159b) 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.574 11:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:26:14.574 Found 0000:82:00.1 (0x8086 - 0x159b) 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.574 
11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:26:14.574 Found net devices under 0000:82:00.0: cvl_0_0 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:26:14.574 Found net devices under 0000:82:00.1: cvl_0_1 00:26:14.574 
11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:14.574 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:14.833 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:14.833 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:14.833 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:14.833 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:14.833 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:14.833 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:14.833 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:14.833 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:14.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:14.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:26:14.833 00:26:14.833 --- 10.0.0.2 ping statistics --- 00:26:14.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.833 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:26:14.833 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:14.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:14.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:26:14.834 00:26:14.834 --- 10.0.0.1 ping statistics --- 00:26:14.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.834 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:14.834 11:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2733827 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2733827 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2733827 ']' 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.834 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:14.834 [2024-11-19 11:28:10.240187] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:14.834 [2024-11-19 11:28:10.241257] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:26:14.834 [2024-11-19 11:28:10.241325] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.093 [2024-11-19 11:28:10.331138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:15.093 [2024-11-19 11:28:10.394083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.093 [2024-11-19 11:28:10.394159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.093 [2024-11-19 11:28:10.394174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.093 [2024-11-19 11:28:10.394186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.093 [2024-11-19 11:28:10.394211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:15.093 [2024-11-19 11:28:10.395887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.093 [2024-11-19 11:28:10.395949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.093 [2024-11-19 11:28:10.395952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.093 [2024-11-19 11:28:10.495096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:15.093 [2024-11-19 11:28:10.495310] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:15.093 [2024-11-19 11:28:10.495325] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:15.093 [2024-11-19 11:28:10.495587] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:15.093 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.093 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:26:15.093 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:15.093 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:15.093 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:15.093 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.093 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:26:15.093 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:15.351 [2024-11-19 11:28:10.796648] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.351 11:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:15.609 11:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.867 [2024-11-19 11:28:11.356975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.125 11:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:16.384 11:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:16.642 Malloc0 00:26:16.642 11:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:16.900 Delay0 00:26:16.900 11:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:17.158 11:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:17.416 NULL1 00:26:17.416 11:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:26:17.674 11:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2734234 00:26:17.674 11:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:17.674 11:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:17.674 11:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:19.049 Read completed with error (sct=0, sc=11) 00:26:19.049 11:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:19.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:19.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:19.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:26:19.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:19.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:19.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:19.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:19.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:19.307 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:19.307 11:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:26:19.307 11:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:26:19.566 true
00:26:19.566 11:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234
00:26:19.566 11:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:20.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:20.501 11:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:20.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:20.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:20.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:20.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:20.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:20.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:20.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:20.501 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:20.501 [2024-11-19 11:28:15.991264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" errors repeated for each queued read, timestamps 11:28:15.991264 through 11:28:16.009623 (log time 00:26:20.501-00:26:20.785) ...]
00:26:20.785 [2024-11-19 11:28:16.009685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.009748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.009812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.009875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.009936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.009995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.010055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.010118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.010182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.010245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.010306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 
11:28:16.011292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.011977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.012963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.013025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 
[2024-11-19 11:28:16.013076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.013139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.013204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.013268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.013331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.013401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.013466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.013534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.013597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.785 [2024-11-19 11:28:16.013657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.013719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.013782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.013847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.013910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.013972] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.014961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015902] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.015967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.016992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 
11:28:16.017730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.017981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.018975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.019037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.019624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.019701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.019765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.019830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.019891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.019964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.020030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 
[2024-11-19 11:28:16.020091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.786 [2024-11-19 11:28:16.020154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.020964] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.021945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.787 [2024-11-19 11:28:16.022794] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:26:20.787 [2024-11-19 11:28:16.022859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:26:20.787 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:26:20.787 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:26:20.790 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-11-19 11:28:16.044454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.790 [2024-11-19 11:28:16.044518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.790 [2024-11-19 11:28:16.044578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.790 [2024-11-19 11:28:16.044633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.790 [2024-11-19 11:28:16.044712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.790 [2024-11-19 11:28:16.044773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.790 [2024-11-19 11:28:16.044834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.790 [2024-11-19 11:28:16.044895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.790 [2024-11-19 11:28:16.044955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.790 [2024-11-19 11:28:16.045015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.790 [2024-11-19 11:28:16.045073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045324] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.045971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.046032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.046090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.046148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.046203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.791 [2024-11-19 11:28:16.046260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.046318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.046894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.046962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047684] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.047994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.048989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 
11:28:16.049546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.049980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.050957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.051533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.051605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.051664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.051723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.051785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.051847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 
[2024-11-19 11:28:16.051910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.051969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.791 [2024-11-19 11:28:16.052037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052823] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.052948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.053991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054739] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.054991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.055053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.055113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.055175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.055240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.055299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.055356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.055447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.055516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.055585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.056978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 
11:28:16.057096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.057961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.058025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.792 [2024-11-19 11:28:16.058089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.080951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 
[2024-11-19 11:28:16.081068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.081998] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.796 [2024-11-19 11:28:16.082956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.083972] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.084033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.084092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.084154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.796 [2024-11-19 11:28:16.084222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.084282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.084370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.084443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.084914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.084983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.085980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 
11:28:16.086274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.086985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.087963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 
[2024-11-19 11:28:16.088094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.088938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089138] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.089975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.090034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.797 [2024-11-19 11:28:16.090091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.090143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.090199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.090260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.090319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.090404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.797 [2024-11-19 11:28:16.090468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.090534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.090596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.090674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.090735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.090796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.090857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.090914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.090969] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.091945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.092005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.092070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.092904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.092967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.093028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.093088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.093156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.093224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.093294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.093372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.093436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.093495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 11:28:16.093555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798 [2024-11-19 
11:28:16.093618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.798
00:26:20.800 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-11-19 11:28:16.115179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.115994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 
[2024-11-19 11:28:16.116118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.116980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117043] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.117978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.118993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.119051] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.119117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.119175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.119233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.119294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.119382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.119445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.119512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.119565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.119631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.119715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.120514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.120577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.120636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.120714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.120771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.120830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.801 [2024-11-19 11:28:16.120890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.120955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 
11:28:16.121634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.121963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.122961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 
[2024-11-19 11:28:16.123555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.123950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124467] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.124939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.125987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126437] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.126827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.127319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.127411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.127467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.127526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.127584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.802 [2024-11-19 11:28:16.127653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.127729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.127788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.127850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.127914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.127973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.128033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.128087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.128143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.128202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.128265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.128326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.128413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.128476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.128535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.128594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 11:28:16.128656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [2024-11-19 
11:28:16.128729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.803 [... identical *ERROR* line repeated through 11:28:16.150223 ...] 00:26:20.805 [2024-11-19
11:28:16.150278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.805 [2024-11-19 11:28:16.150338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.805 [2024-11-19 11:28:16.150423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.805 [2024-11-19 11:28:16.150484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.805 [2024-11-19 11:28:16.150545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.805 [2024-11-19 11:28:16.150604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.805 [2024-11-19 11:28:16.150663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.805 [2024-11-19 11:28:16.150733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.805 [2024-11-19 11:28:16.150793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.150853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.150913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.150977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.151963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 
[2024-11-19 11:28:16.152276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.152974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153156] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.153980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.154681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.806 [2024-11-19 11:28:16.154747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.154809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.154871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.154933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155669] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.155992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.156949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 
11:28:16.157506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.157995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.806 [2024-11-19 11:28:16.158052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.158109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.158162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.158219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.158275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.158334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.158416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.158483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.158539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.158603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.158682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.158887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.158955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 
[2024-11-19 11:28:16.159491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.159942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160406] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.160800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.161556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.161626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.161705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.161766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.161828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.161888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.161946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.162955] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.807 [2024-11-19 11:28:16.163934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.807
[previous error line repeated with successive timestamps from 11:28:16.163994 through 11:28:16.184445; identical repeats elided]
00:26:20.810 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:26:20.810 [2024-11-19 11:28:16.184839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.184901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.184963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 
11:28:16.185796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.185967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.186970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 
[2024-11-19 11:28:16.187616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.187949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.188016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.188083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.188143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.188205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.188267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.188326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.188417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.188677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.188763] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.188824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.188884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.188951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.189958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190582] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.190962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.191944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.192005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.192064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.192125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.192188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.192250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.192304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 11:28:16.192388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.810 [2024-11-19 
11:28:16.192458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.192518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.192581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.192643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.192722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.193650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.193729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.193785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.193835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.193892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.193943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.194997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 
[2024-11-19 11:28:16.195179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.195945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196071] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.196984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.811 [2024-11-19 11:28:16.197045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.197104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.197166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.197227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.197290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.197373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.197445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.197507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.197568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.197629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.197900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.197966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198143] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.198972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.199034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.811 [2024-11-19 11:28:16.199097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.220461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.220525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.220586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.220668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.220727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.220786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.220848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.220905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.220966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.221022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.221081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.221138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.221188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.221245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 
11:28:16.222195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.222998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.223941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 
[2024-11-19 11:28:16.224080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.224985] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.814 [2024-11-19 11:28:16.225974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.226036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.226097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.226157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.226433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.226501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.226570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.226630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.226709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.226761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.226833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.226894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.226951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227079] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.227972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 
11:28:16.228924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.228986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.814 [2024-11-19 11:28:16.229830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.229888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.229946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.230004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.230067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.230128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.230190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.230247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.230307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.230389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.230458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.231280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.231359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.231435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.231502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 
[2024-11-19 11:28:16.231565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.231635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.231714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.231775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.231834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.231891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.231951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232473] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.232973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.233950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.234007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.234070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.234127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.234196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.234260] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.815 [2024-11-19 11:28:16.234320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:26:20.817 [2024-11-19 11:28:16.255228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.255285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.255342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.255425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.255490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:26:20.817 [2024-11-19 11:28:16.255583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.255648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.255710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.255773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.255832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.255890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.255946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.256990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.257072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 
11:28:16.257148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.257219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.257280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.257348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.257419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.257482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:20.817 [2024-11-19 11:28:16.257544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.257605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.257671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.257750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.257813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.257874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.257933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.257997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.258060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.258120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.258185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.259090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.259194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.259277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.259386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.259462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.259545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.259639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.259735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.259815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.259894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.259978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.260063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 
[2024-11-19 11:28:16.260149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.260237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.260324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.260413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.260498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.260582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.260656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.260738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.260818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.260899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.260977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.261051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.261115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.261185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.103 [2024-11-19 11:28:16.261249] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.261310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.261373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.261435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.261497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.261558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.261619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.261679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.261742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.261806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.261868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.261920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.261983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.262986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.263051] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.263108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.263166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.263247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.263299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.263383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.263445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.263503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.263579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.263811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.263880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.263941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.264857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.265393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.265454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.265513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 
11:28:16.265571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.265635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.265709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.265767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.265828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.265885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.265944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.266947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.267010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.267070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.267134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.267203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.267268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.267328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.267416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 
[2024-11-19 11:28:16.267481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.267548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.267610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.267688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.267751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.104 [2024-11-19 11:28:16.267817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.267881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.267945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268405] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.268981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.269959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.270020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.270085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.270147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.270208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.270268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.270329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.105 [2024-11-19 11:28:16.270419] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.291993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.292053] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.292114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.292175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.292235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.293115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.293192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.293261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.293321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.293409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.293471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.293535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.293595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.293657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.108 [2024-11-19 11:28:16.293735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.293799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.293860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.293919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.293980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 
11:28:16.294762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.294945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.295942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 
[2024-11-19 11:28:16.296597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.296992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297696] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.297988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.298045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.298104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.298163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.298222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.298278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.298339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.298917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.298986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.299961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.300019] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.300078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.300140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.300199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.300271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.109 [2024-11-19 11:28:16.300352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.300427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.300497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.300562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.300627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.300706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.300781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.300840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.300900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.300967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 
11:28:16.301937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.301996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.302983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.303206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.303268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.303326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.303412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.303474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.303539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.303607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.303668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.303753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.303827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.303887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 
[2024-11-19 11:28:16.303946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304847] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.304965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.110 [2024-11-19 11:28:16.305822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous line repeated continuously from 00:26:21.110 through 00:26:21.114 (wall clock 11:28:16.305883 - 11:28:16.327686), several hundred occurrences; interleaved shell output: true]
block size 512 > SGL length 1 00:26:21.114 [2024-11-19 11:28:16.327748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.114 [2024-11-19 11:28:16.327804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.114 [2024-11-19 11:28:16.327867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.114 [2024-11-19 11:28:16.327934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.114 [2024-11-19 11:28:16.327999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.114 [2024-11-19 11:28:16.328059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.114 [2024-11-19 11:28:16.328118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.114 [2024-11-19 11:28:16.328180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.114 [2024-11-19 11:28:16.328239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.114 [2024-11-19 11:28:16.328300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.114 [2024-11-19 11:28:16.328369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.114 [2024-11-19 11:28:16.328442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.114 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:21.114 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:21.114 [2024-11-19 11:28:16.329003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:26:21.114 Message suppressed 999 times: [2024-11-19 11:28:16.329143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:26:21.114 Read completed with error (sct=0, sc=15)
00:26:21.116 [2024-11-19 11:28:16.345846] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.116 [2024-11-19 11:28:16.345907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.116 [2024-11-19 11:28:16.345967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.116 [2024-11-19 11:28:16.346028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.116 [2024-11-19 11:28:16.346112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.116 [2024-11-19 11:28:16.346171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.116 [2024-11-19 11:28:16.346234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.116 [2024-11-19 11:28:16.346308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.346397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.346474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.346554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.346615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.346690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.346751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.346808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.117 [2024-11-19 11:28:16.346868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.346928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.346990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347752] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.347947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.348006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.348070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.348132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.348358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.348428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.348489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.348549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.348614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.348690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.349168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.349247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.349312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.349409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.349486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.349553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.349616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.349677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.349754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.349815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.349881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.349940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 
11:28:16.350251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.350948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.351944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.352001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.352060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 
[2024-11-19 11:28:16.352119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.352178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.117 [2024-11-19 11:28:16.352239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.352295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.352377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.352437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.352500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.352563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.352626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.352706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.352766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.352828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.352888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.352949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.353016] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.353080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.353141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.353202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.353269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.353517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.353583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.353647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.353740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.353810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.353868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.353932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.354009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.354077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.354136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.118 [2024-11-19 11:28:16.354637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.354716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.354779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.354838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.354895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.354952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355546] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.355979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.356990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 
11:28:16.357458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.357984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.358990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.118 [2024-11-19 11:28:16.359051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 
[2024-11-19 11:28:16.359507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.359934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.360004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.360065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.360126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.360189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.360250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.360311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.360395] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.119 [2024-11-19 11:28:16.360460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c:361 error repeated ~280 times between 11:28:16.360 and 11:28:16.381; duplicates omitted]
[2024-11-19 11:28:16.381773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.381827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.381885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.381942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382678] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.382988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.383051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.383112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.383180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.383246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.383493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.383558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.383621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.383706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.383766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.122 [2024-11-19 11:28:16.383828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.383888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.383952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384740] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.122 [2024-11-19 11:28:16.384927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.384982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.385985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.386046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.386105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.386161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.386224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.386279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.386335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.386964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 
11:28:16.387084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.387969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.388883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 
[2024-11-19 11:28:16.388947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389900] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.389966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.390978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.391048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.391271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.391333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.391419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.391494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.391561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.123 [2024-11-19 11:28:16.391621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.391681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.391766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.391837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.391899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.391966] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.392033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.392094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.392167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.392228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.392289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.392350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.393937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 
11:28:16.394589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.394948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.395003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.395061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.395120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.395175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.395233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.395295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.395378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.395441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [2024-11-19 11:28:16.395512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.124 [... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd errors repeated, timestamps 2024-11-19 11:28:16.395571 through 11:28:16.403055 ...] Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:26:21.125 [... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd errors repeated, timestamps 2024-11-19 11:28:16.403118 through 11:28:16.417700 ...] 00:26:21.127 
[2024-11-19 11:28:16.417758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.127 [2024-11-19 11:28:16.417819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.127 [2024-11-19 11:28:16.417879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.127 [2024-11-19 11:28:16.417939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.127 [2024-11-19 11:28:16.418014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.127 [2024-11-19 11:28:16.418076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.127 [2024-11-19 11:28:16.418138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.127 [2024-11-19 11:28:16.418201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.418263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.418330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.418427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.418492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.418562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.418632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.418694] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.418769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.418828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.418895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.418958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.419956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420522] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.420988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.421050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.421112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.421171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.421230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.421288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.421371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.421438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.421514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.421786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.421854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.421913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.421974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 
11:28:16.422607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.422994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.423989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.424062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.424122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.424188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.424252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.424310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.424398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.128 [2024-11-19 11:28:16.424463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 
[2024-11-19 11:28:16.424520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.424579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.424638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.424714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.424773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.424830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.424899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.424961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425424] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.425873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.426418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.426481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.426550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.426615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.426695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.426756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.426817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.129 [2024-11-19 11:28:16.426889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.426958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427799] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.427991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.428953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 
11:28:16.429629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.429963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.430024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.430090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.430151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.430213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.430289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.430388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.430463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.431328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.431425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [2024-11-19 11:28:16.431490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.129 [... identical *ERROR* lines repeated through 2024-11-19 11:28:16.452911, omitted ...]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.452969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 
[2024-11-19 11:28:16.453894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.453951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454768] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.454879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.455987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.456060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.456126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.456195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.456262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.456323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.456406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.456472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.456535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.456600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.456679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.456739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.133 [2024-11-19 11:28:16.456800] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.456858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.456926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.456986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.457045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.457103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.457169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.457692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.457761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.457819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.457881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.457945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.458982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 
11:28:16.459041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.459955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 
[2024-11-19 11:28:16.460901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.460957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.461941] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.462986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.463050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.463117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.463180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.134 [2024-11-19 11:28:16.463240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.463298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.463383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.463446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.463510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.463571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.463631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.463708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.463768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.464548] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.464611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.464687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.464745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.464806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.464869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.464929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.464980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.465967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.466018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.466071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.466127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.466186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 11:28:16.466245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.135 [2024-11-19 
11:28:16.466312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.136 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.487908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.487967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 
11:28:16.488800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.138 [2024-11-19 11:28:16.488975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.489965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 
[2024-11-19 11:28:16.490758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.490943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.491006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.491057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.491115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.491748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.491815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.491887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.491945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492196] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.492959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.493994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494053] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.494955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.495022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.495083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.495141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.495200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.495258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.495319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.495415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.495480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.495543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.139 [2024-11-19 11:28:16.495604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.495670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.495756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.495817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 
11:28:16.496092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.496977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 
[2024-11-19 11:28:16.497918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.497974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.498032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.498521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.498589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.498649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.498724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.498785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.498845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.498915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.498974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499215] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.499972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.500960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.501020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.140 [2024-11-19 11:28:16.501078] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.501139] last message repeated for each subsequent read command through [2024-11-19 11:28:16.521818]
> SGL length 1 00:26:21.144 [2024-11-19 11:28:16.521878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.521937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.521999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522790] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.522970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.523970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 
11:28:16.524794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.524977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.525037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.525095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.525153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.525211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.525267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.525326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.526998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 
[2024-11-19 11:28:16.527379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.527996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.528064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.144 [2024-11-19 11:28:16.528123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528245] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.528970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.529986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530101] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.530962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.531962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.532039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.532103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 
11:28:16.532178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.532241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.532304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.532893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.532958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.533982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 
[2024-11-19 11:28:16.534566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.534968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.535030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.145 [2024-11-19 11:28:16.535089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535493] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.535943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.536008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.536065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.536128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.536190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.536248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.536310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.146 [2024-11-19 11:28:16.536394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.146 [2024-11-19 11:28:16.536451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.147 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:26:21.149 [2024-11-19 11:28:16.556997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.557996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558116] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.558978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.559035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.559094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.559159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.559219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.559275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.559332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.559419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.559483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.559544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.559608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.559680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.560571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.560644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.560726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.560787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 
11:28:16.560846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.560910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.560971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.149 [2024-11-19 11:28:16.561867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.561927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.561985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 
[2024-11-19 11:28:16.562706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.562936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563575] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.563991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.564048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.564123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.564187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.564254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.564315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.564400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.564464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.150 [2024-11-19 11:28:16.564530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.564599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.564822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.564885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.564951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565602] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.565950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.566914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.567407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.567475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.567532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.567591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.567650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.567724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.150 [2024-11-19 11:28:16.567783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.567848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 
11:28:16.567908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.567983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.568954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 
[2024-11-19 11:28:16.569833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.569959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570795] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.570942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.151 [2024-11-19 11:28:16.571960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 (previous *ERROR* message repeated through [2024-11-19 11:28:16.595002])
> SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595916] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.595974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.416 [2024-11-19 11:28:16.596037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.596960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.597021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.597083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.597141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.597209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.597276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.597334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.597434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.597501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.597563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.597632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.597708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.597769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 
11:28:16.597971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.598939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.599000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.599467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.599537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.599600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.599681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.599744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.599805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.599865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.599931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.599998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 
[2024-11-19 11:28:16.600237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.600989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601109] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.601999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:26:21.417 [2024-11-19 11:28:16.602061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.602121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.602189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.602250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.602310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.602394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.602455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.602516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.417 [2024-11-19 11:28:16.602580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.602640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.602717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.602777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.602837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.602899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.602967] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.603027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.603089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.603147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.603198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.603255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.603314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.603399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.603826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.603894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.603954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.604953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 
11:28:16.605144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.605997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.606890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 
[2024-11-19 11:28:16.606967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.607812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.418 [2024-11-19 11:28:16.608654] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.419 [2024-11-19 11:28:16.608732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:26:21.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:21.419 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:21.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:21.677 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:26:21.677 11:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:26:21.935 true 00:26:21.935 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:21.935 11:28:17
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:22.193 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:22.450 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:26:22.450 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:26:22.710 true 00:26:22.710 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:22.710 11:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:22.967 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:23.225 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:26:23.225 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:26:23.484 true 00:26:23.484 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 
00:26:23.484 11:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:24.420 11:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:24.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:24.678 11:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:26:24.678 11:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:26:24.936 true 00:26:24.936 11:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:24.936 11:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:25.195 11:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:25.452 11:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:26:25.452 11:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:26:25.710 true 00:26:25.710 11:28:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:25.710 11:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:25.968 11:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:26.233 11:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:26:26.233 11:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:26:26.490 true 00:26:26.490 11:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:26.490 11:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:27.423 11:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:27.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:27.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:27.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:27.681 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:26:27.681 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:26:27.939 true 00:26:27.939 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:27.939 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:28.197 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:28.763 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:26:28.763 11:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:26:28.763 true 00:26:28.764 11:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:28.764 11:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.697 11:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:29.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:29.955 11:28:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:26:29.955 11:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:26:30.213 true 00:26:30.213 11:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:30.213 11:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:30.470 11:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:30.728 11:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:26:30.728 11:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:26:30.985 true 00:26:30.985 11:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:30.985 11:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:31.918 11:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:26:31.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:31.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:32.175 11:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:26:32.175 11:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:26:32.432 true 00:26:32.432 11:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:32.432 11:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:32.690 11:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:32.948 11:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:26:32.948 11:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:26:33.206 true 00:26:33.206 11:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:33.206 11:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:33.463 11:28:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:33.721 11:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:26:33.721 11:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:26:33.978 true 00:26:33.978 11:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:33.978 11:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:34.913 11:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:35.172 11:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:26:35.173 11:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:26:35.459 true 00:26:35.459 11:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:35.459 11:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:26:35.742 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:36.000 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:26:36.000 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:26:36.258 true 00:26:36.258 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:36.258 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:36.516 11:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:36.773 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:26:36.773 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:26:37.031 true 00:26:37.031 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:37.031 11:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:26:37.966 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:38.532 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:26:38.532 11:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:26:38.532 true 00:26:38.532 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:38.532 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:38.790 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:39.048 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:26:39.048 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:26:39.305 true 00:26:39.563 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:39.563 11:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:39.822 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:40.080 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:26:40.080 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:26:40.338 true 00:26:40.338 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:40.338 11:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:41.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:41.277 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:41.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:41.540 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:26:41.540 11:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:26:41.798 true 00:26:41.798 11:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 
00:26:41.798 11:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:42.055 11:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:42.313 11:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:26:42.313 11:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:26:42.571 true 00:26:42.571 11:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:42.571 11:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:42.830 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:43.088 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:26:43.088 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:26:43.347 true 00:26:43.347 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2734234 00:26:43.347 11:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:44.278 11:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:44.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:44.536 11:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:26:44.536 11:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:26:44.794 true 00:26:44.794 11:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:44.794 11:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:45.360 11:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:45.360 11:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:26:45.360 11:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:26:45.618 true 00:26:45.618 11:28:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:45.618 11:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:46.183 11:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:46.183 11:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:26:46.183 11:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:26:46.441 true 00:26:46.441 11:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:46.441 11:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:47.374 11:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:47.632 11:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:26:47.632 11:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:26:47.891 true 
00:26:47.891 Initializing NVMe Controllers
00:26:47.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:47.891 Controller IO queue size 128, less than required.
00:26:47.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:47.891 Controller IO queue size 128, less than required.
00:26:47.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:47.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:47.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:47.891 Initialization complete. Launching workers.
00:26:47.891 ========================================================
00:26:47.891                                                                                                    Latency(us)
00:26:47.891 Device Information                                                       : IOPS       MiB/s    Average        min        max
00:26:47.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1338.37    0.65     42881.16    2947.37    1013897.74
00:26:47.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9418.50    4.60     13589.57    2538.22     446899.19
00:26:47.891 ========================================================
00:26:47.891 Total                                                                   : 10756.87   5.25     17234.02    2538.22    1013897.74
00:26:47.891
00:26:47.891 11:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234
00:26:47.891 11:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:48.149 11:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:48.407 11:28:43
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:26:48.407 11:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:26:48.973 true 00:26:48.973 11:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2734234 00:26:48.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2734234) - No such process 00:26:48.973 11:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2734234 00:26:48.973 11:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.973 11:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:49.231 11:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:26:49.231 11:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:26:49.231 11:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:26:49.231 11:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:49.231 11:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:26:49.489 null0 00:26:49.489 11:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:49.489 11:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:49.489 11:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:26:49.747 null1 00:26:50.005 11:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:50.005 11:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:50.005 11:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:26:50.263 null2 00:26:50.263 11:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:50.263 11:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:50.263 11:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:26:50.521 null3 00:26:50.521 11:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:50.521 11:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:50.521 11:28:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:26:50.778 null4 00:26:50.778 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:50.778 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:50.778 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:26:51.036 null5 00:26:51.036 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:51.036 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:51.036 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:26:51.294 null6 00:26:51.294 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:51.294 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:51.294 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:26:51.554 null7 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2738243 2738244 2738246 2738248 2738250 2738252 2738254 2738256 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.554 11:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:51.813 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:51.813 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:51.813 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:26:51.813 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:51.813 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:51.813 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:51.813 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:51.813 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:52.071 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:26:52.072 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.072 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:52.072 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.072 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.072 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:52.072 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.072 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.072 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:52.329 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:52.329 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:52.329 11:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:52.329 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:52.329 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:52.330 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:52.330 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:52.330 11:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:52.587 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.587 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.587 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:52.846 11:28:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.846 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:53.105 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:53.105 11:28:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:53.105 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:53.105 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:53.105 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:53.105 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:53.105 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:53.105 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.363 11:28:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.363 11:28:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.363 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:53.622 11:28:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:53.622 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:53.622 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:53.622 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:53.622 11:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:53.622 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:53.622 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:53.622 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:53.880 11:28:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.880 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:53.881 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.881 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.881 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:53.881 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.881 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.881 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:53.881 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:53.881 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:53.881 11:28:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:54.144 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:54.144 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:54.144 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:54.145 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:54.145 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:54.145 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:54.145 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:54.145 11:28:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:54.409 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.409 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.409 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.667 11:28:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:54.667 11:28:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:54.667 11:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:54.926 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:54.926 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:54.926 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:54.926 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:54.926 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:54.926 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:54.926 11:28:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:54.926 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
4 nqn.2016-06.io.spdk:cnode1 null3 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.185 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:55.443 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:55.443 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:55.443 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:55.443 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:55.443 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:55.443 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:55.443 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:55.443 11:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.702 11:28:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.702 11:28:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:55.702 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:55.703 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:55.961 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:55.961 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:55.961 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:55.961 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:55.961 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:55.961 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:55.961 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:55.961 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:56.220 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:56.787 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:56.787 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:56.787 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:56.787 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:26:56.787 11:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:56.787 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:56.787 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:56.787 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.045 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:57.303 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:57.303 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:57.303 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:57.303 11:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:57.303 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:57.303 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:57.303 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:57.303 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:26:57.562 11:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:57.562 rmmod nvme_tcp 00:26:57.562 rmmod nvme_fabrics 00:26:57.562 rmmod nvme_keyring 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2733827 ']' 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2733827 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2733827 ']' 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2733827 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:57.562 11:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2733827 00:26:57.562 11:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:57.562 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:57.562 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2733827' 00:26:57.562 killing process with pid 2733827 00:26:57.562 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2733827 00:26:57.562 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2733827 00:26:57.821 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:57.821 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:57.821 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:57.821 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:26:57.821 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:26:57.821 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:57.821 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:26:57.821 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:57.821 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:57.821 11:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.821 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.821 11:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:00.426 00:27:00.426 real 0m48.098s 00:27:00.426 user 3m19.108s 00:27:00.426 sys 0m22.584s 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:00.426 ************************************ 00:27:00.426 END TEST nvmf_ns_hotplug_stress 00:27:00.426 ************************************ 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:00.426 ************************************ 00:27:00.426 START TEST nvmf_delete_subsystem 00:27:00.426 ************************************ 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:00.426 * Looking for test storage... 00:27:00.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:00.426 
11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:00.426 11:28:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:00.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.426 --rc genhtml_branch_coverage=1 00:27:00.426 --rc genhtml_function_coverage=1 00:27:00.426 --rc genhtml_legend=1 00:27:00.426 --rc geninfo_all_blocks=1 00:27:00.426 --rc geninfo_unexecuted_blocks=1 00:27:00.426 00:27:00.426 ' 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:00.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.426 --rc genhtml_branch_coverage=1 00:27:00.426 --rc genhtml_function_coverage=1 00:27:00.426 --rc genhtml_legend=1 00:27:00.426 --rc geninfo_all_blocks=1 00:27:00.426 --rc geninfo_unexecuted_blocks=1 00:27:00.426 00:27:00.426 ' 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:00.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.426 --rc genhtml_branch_coverage=1 00:27:00.426 --rc genhtml_function_coverage=1 00:27:00.426 --rc genhtml_legend=1 00:27:00.426 --rc geninfo_all_blocks=1 00:27:00.426 --rc 
geninfo_unexecuted_blocks=1 00:27:00.426 00:27:00.426 ' 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:00.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.426 --rc genhtml_branch_coverage=1 00:27:00.426 --rc genhtml_function_coverage=1 00:27:00.426 --rc genhtml_legend=1 00:27:00.426 --rc geninfo_all_blocks=1 00:27:00.426 --rc geninfo_unexecuted_blocks=1 00:27:00.426 00:27:00.426 ' 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.426 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.427 
11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:00.427 11:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:00.427 11:28:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:27:02.962 Found 0000:82:00.0 (0x8086 - 0x159b) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:82:00.1 (0x8086 - 0x159b)' 00:27:02.962 Found 0000:82:00.1 (0x8086 - 0x159b) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.962 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:02.963 11:28:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:27:02.963 Found net devices under 0000:82:00.0: cvl_0_0 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:27:02.963 Found net devices under 0000:82:00.1: cvl_0_1 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:02.963 11:28:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:27:02.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:27:02.963 00:27:02.963 --- 10.0.0.2 ping statistics --- 00:27:02.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.963 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:02.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:27:02.963 00:27:02.963 --- 10.0.0.1 ping statistics --- 00:27:02.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.963 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
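The trace above shows nvmf_tcp_init building a point-to-point NVMe/TCP test topology: one port of the two-port NIC is moved into a network namespace to act as the target, the other stays in the host namespace as the initiator, and reachability is verified with a ping in each direction. A condensed sketch of that setup, with the interface names, addresses, and port taken from the log (running it requires root and the same cvl_0_0/cvl_0_1 NIC pair):

```shell
# Sketch of the namespace topology set up by nvmf_tcp_init (nvmf/common.sh).
# Names and addresses are copied from the trace above; this is illustrative,
# not a drop-in replacement for the harness.
ip netns add cvl_0_0_ns_spdk                # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator IP (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow the NVMe/TCP listener port through the host firewall
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # host -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> host
```

After this, the harness prefixes every target-side command with `ip netns exec cvl_0_0_ns_spdk`, which is why nvmf_tgt is later launched inside the namespace.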
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2741426 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2741426 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2741426 ']' 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.963 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:02.963 [2024-11-19 11:28:58.306650] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:02.963 [2024-11-19 11:28:58.307757] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:27:02.963 [2024-11-19 11:28:58.307823] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.963 [2024-11-19 11:28:58.388857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:02.963 [2024-11-19 11:28:58.445643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.963 [2024-11-19 11:28:58.445697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.963 [2024-11-19 11:28:58.445725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.963 [2024-11-19 11:28:58.445736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.963 [2024-11-19 11:28:58.445746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:02.963 [2024-11-19 11:28:58.447131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.963 [2024-11-19 11:28:58.447136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.223 [2024-11-19 11:28:58.537486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:03.223 [2024-11-19 11:28:58.537516] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:03.223 [2024-11-19 11:28:58.537781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:03.223 [2024-11-19 11:28:58.591772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:03.223 [2024-11-19 11:28:58.607987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:03.223 NULL1 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:03.223 Delay0 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2741483 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:03.223 11:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:03.223 [2024-11-19 11:28:58.690425] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
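The rpc_cmd calls in the trace above amount to the following sequence (rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py; the rpc.py invocation style and its default /var/tmp/spdk.sock socket are assumptions of this sketch, the commands and arguments are copied from the log). The delay bdev keeps I/O in flight long enough that the later nvmf_delete_subsystem races with outstanding commands, which is the behavior under test:

```shell
# RPC sequence from target/delete_subsystem.sh as it appears in the trace.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512     # null backing bdev, 512 B blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # add ~1 s latency per I/O
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# spdk_nvme_perf then runs against 10.0.0.2:4420 while the test deletes the
# subsystem out from under it:
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The "completed with error" and "starting I/O failed: -6" lines that follow are the expected outcome: in-flight I/O is failed back to the initiator as the subsystem is torn down, rather than hanging.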
00:27:05.750 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.750 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.750 11:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 starting I/O failed: -6 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 starting I/O failed: -6 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 starting I/O failed: -6 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 starting I/O failed: -6 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 starting I/O failed: -6 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 starting I/O failed: -6 00:27:05.750 Write completed with error (sct=0, 
sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 starting I/O failed: -6 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 starting I/O failed: -6 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 starting I/O failed: -6 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 [2024-11-19 11:29:00.857277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c20000c40 is same with the state(6) to be set 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 starting I/O failed: -6 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Write 
completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 starting I/O failed: -6 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Read completed with error (sct=0, sc=8) 00:27:05.750 Write completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Write completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Write completed with error (sct=0, sc=8) 00:27:05.751 starting I/O failed: -6 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Write completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Write completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 starting I/O failed: -6 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Write completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Write completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error (sct=0, sc=8) 00:27:05.751 Read completed with error 
(sct=0, sc=8)
00:27:05.751 Write completed with error (sct=0, sc=8)
00:27:05.751 Read completed with error (sct=0, sc=8)
00:27:05.751 starting I/O failed: -6
00:27:05.751 [2024-11-19 11:29:00.858459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c2000d020 is same with the state(6) to be set
00:27:05.751 starting I/O failed: -6
00:27:06.685 [2024-11-19 11:29:01.830120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaf9a0 is same with the state(6) to be set
00:27:06.685 Read completed with error (sct=0, sc=8)
00:27:06.685 Write completed with error (sct=0, sc=8)
00:27:06.686 [2024-11-19 11:29:01.856008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aae2c0 is same with the state(6) to be set
00:27:06.686 Read completed with error (sct=0, sc=8)
00:27:06.686 Write completed with error (sct=0, sc=8)
00:27:06.686 [2024-11-19 11:29:01.856227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aae680 is same with the state(6) to be set
00:27:06.686 Read completed with error (sct=0, sc=8)
00:27:06.686 Write completed with error (sct=0, sc=8)
00:27:06.686 [2024-11-19 11:29:01.856458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aae860 is same with the state(6) to be set
00:27:06.686 Read completed with error (sct=0, sc=8)
00:27:06.686 Write completed with error (sct=0, sc=8)
00:27:06.686 [2024-11-19 11:29:01.857204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c2000d350 is same with the state(6) to be set
00:27:06.686 Initializing NVMe Controllers
00:27:06.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:06.686 Controller IO queue size 128, less than required.
00:27:06.686 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:06.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:27:06.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:27:06.686 Initialization complete. Launching workers.
00:27:06.686 ========================================================
00:27:06.686 Latency(us)
00:27:06.686 Device Information : IOPS MiB/s Average min max
00:27:06.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.47 0.09 965671.52 651.36 1012898.81
00:27:06.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.21 0.07 900352.41 577.93 1013541.43
00:27:06.686 ========================================================
00:27:06.686 Total : 326.68 0.16 935836.84 577.93 1013541.43
00:27:06.686
00:27:06.686 [2024-11-19 11:29:01.858359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aaf9a0 (9): Bad file descriptor
00:27:06.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:27:06.686 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.686 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:27:06.686 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2741483
00:27:06.686 11:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2741483
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2741483) - No such process
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2741483
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2741483
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2741483
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
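The `NOT wait 2741483` walk in the trace above is SPDK's `NOT` helper asserting that a command fails (here, waiting on an already-reaped PID must return nonzero). A simplified sketch of the same pattern — not SPDK's exact implementation, which also validates the argument and inspects the exit status range:

```shell
#!/usr/bin/env bash
# NOT: run a command and succeed only if the command fails.
# Simplified reconstruction of the inverted-assertion pattern
# exercised in the autotest_common.sh trace above.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, as the test expects
}

# Example: signalling a PID that no longer exists must fail,
# and NOT turns that expected failure into a test success.
NOT kill -0 99999999 2>/dev/null && echo "process is gone, as expected"
```

The test above relies on this inversion: after the perf process is killed, `wait` on its PID has to fail, so a success from `wait` would itself be a test failure.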
00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:06.944 [2024-11-19 11:29:02.379919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2741967 00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2741967 00:27:06.944 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:06.945 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:07.203 [2024-11-19 11:29:02.445503] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:07.460 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:07.460 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2741967 00:27:07.460 11:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:08.026 11:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:08.026 11:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2741967 00:27:08.026 11:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:08.591 11:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:08.591 11:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2741967 00:27:08.591 11:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:09.156 11:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:09.156 11:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2741967 00:27:09.156 11:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:09.414 11:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:09.414 11:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2741967 00:27:09.414 11:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:09.980 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:09.980 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2741967 00:27:09.980 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:10.238 Initializing NVMe Controllers 00:27:10.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:10.238 Controller IO queue size 128, less than required. 00:27:10.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:10.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:10.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:10.238 Initialization complete. Launching workers. 
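The repeated `(( delay++ > 20 ))` / `kill -0 2741967` / `sleep 0.5` lines in this trace are delete_subsystem.sh polling for the perf process to exit. A minimal sketch of that loop shape, with the bound and interval taken from the trace (the function wrapper and failure message are mine, not SPDK's):

```shell
#!/usr/bin/env bash
# Poll until a process exits, giving up after 20 half-second checks
# (~10s), mirroring the delay loop visible in the trace above.
wait_for_exit() {
    local pid=$1 delay=0
    # kill -0 sends no signal; it only tests whether the PID exists.
    while kill -0 "$pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "process $pid still alive after timeout" >&2
            return 1
        fi
        sleep 0.5
    done
    return 0
}

sleep 1 &            # toy stand-in for the backgrounded spdk_nvme_perf
wait_for_exit $!     # returns once the background job has exited
```

The loop deliberately tolerates `kill: No such process` noise on stderr, which is exactly the message the log shows once the perf process disappears between checks.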
00:27:10.238 ========================================================
00:27:10.238 Latency(us)
00:27:10.238 Device Information : IOPS MiB/s Average min max
00:27:10.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004982.14 1000259.39 1014899.69
00:27:10.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004653.20 1000198.05 1043734.93
00:27:10.238 ========================================================
00:27:10.238 Total : 256.00 0.12 1004817.67 1000198.05 1043734.93
00:27:10.239
00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2741967
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2741967) - No such process
00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2741967
00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:10.497 rmmod nvme_tcp 00:27:10.497 rmmod nvme_fabrics 00:27:10.497 rmmod nvme_keyring 00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2741426 ']' 00:27:10.497 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2741426 00:27:10.498 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2741426 ']' 00:27:10.498 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2741426 00:27:10.498 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:27:10.498 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:10.498 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2741426 00:27:10.498 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:10.498 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:10.498 11:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2741426' 00:27:10.498 killing process with pid 2741426 00:27:10.498 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2741426 00:27:10.498 11:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2741426 00:27:10.758 11:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:10.758 11:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:10.758 11:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:10.758 11:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:10.758 11:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:27:10.758 11:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:10.758 11:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:27:10.758 11:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:10.758 11:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:10.758 11:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.758 11:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.758 11:29:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:13.301 00:27:13.301 real 0m12.888s 00:27:13.301 user 0m25.050s 00:27:13.301 sys 0m4.002s 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:13.301 ************************************ 00:27:13.301 END TEST nvmf_delete_subsystem 00:27:13.301 ************************************ 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:13.301 ************************************ 00:27:13.301 START TEST nvmf_host_management 00:27:13.301 ************************************ 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:13.301 * Looking for test storage... 
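Nearby in this trace, scripts/common.sh compares the installed `lcov --version` against a threshold with `lt 1.15 2` to decide which coverage options to use: each version string is split on `.`, `-`, and `:` and compared numerically field by field. A condensed, hypothetical sketch of that component-wise comparison (`version_lt` is my name for it; treating missing components as 0 is an assumption, and only numeric components are handled):

```shell
#!/usr/bin/env bash
# Component-wise "less than" for version strings, following the
# lt/cmp_versions steps visible in the scripts/common.sh trace:
# split on "."/"-"/":" and compare numerically, field by field.
version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing field counts as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the log compares only the leading components: `1.15` loses to `2` on the very first field, so the remaining fields are never examined.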
00:27:13.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:13.301 11:29:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:13.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.301 --rc genhtml_branch_coverage=1 00:27:13.301 --rc genhtml_function_coverage=1 00:27:13.301 --rc genhtml_legend=1 00:27:13.301 --rc geninfo_all_blocks=1 00:27:13.301 --rc geninfo_unexecuted_blocks=1 00:27:13.301 00:27:13.301 ' 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:13.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.301 --rc genhtml_branch_coverage=1 00:27:13.301 --rc genhtml_function_coverage=1 00:27:13.301 --rc genhtml_legend=1 00:27:13.301 --rc geninfo_all_blocks=1 00:27:13.301 --rc geninfo_unexecuted_blocks=1 00:27:13.301 00:27:13.301 ' 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:13.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.301 --rc genhtml_branch_coverage=1 00:27:13.301 --rc genhtml_function_coverage=1 00:27:13.301 --rc genhtml_legend=1 00:27:13.301 --rc geninfo_all_blocks=1 00:27:13.301 --rc geninfo_unexecuted_blocks=1 00:27:13.301 00:27:13.301 ' 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:13.301 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.301 --rc genhtml_branch_coverage=1 00:27:13.301 --rc genhtml_function_coverage=1 00:27:13.301 --rc genhtml_legend=1 00:27:13.301 --rc geninfo_all_blocks=1 00:27:13.301 --rc geninfo_unexecuted_blocks=1 00:27:13.301 00:27:13.301 ' 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.301 11:29:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.301 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.302 
11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:13.302 11:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:15.839 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.839 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:15.839 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:15.839 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:15.839 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:15.840 
11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.840 11:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:27:15.840 Found 0000:82:00.0 (0x8086 - 0x159b) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:15.840 11:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:27:15.840 Found 0000:82:00.1 (0x8086 - 0x159b) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.840 11:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:27:15.840 Found net devices under 0000:82:00.0: cvl_0_0 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:27:15.840 Found net devices under 0000:82:00.1: cvl_0_1 00:27:15.840 11:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.840 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:15.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:27:15.841 00:27:15.841 --- 10.0.0.2 ping statistics --- 00:27:15.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.841 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:15.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:27:15.841 00:27:15.841 --- 10.0.0.1 ping statistics --- 00:27:15.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.841 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2744675 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2744675 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2744675 ']' 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.841 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:15.841 [2024-11-19 11:29:11.324525] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:15.841 [2024-11-19 11:29:11.325558] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:27:15.841 [2024-11-19 11:29:11.325614] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.100 [2024-11-19 11:29:11.407970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.100 [2024-11-19 11:29:11.462774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.100 [2024-11-19 11:29:11.462830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.100 [2024-11-19 11:29:11.462858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.100 [2024-11-19 11:29:11.462869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.100 [2024-11-19 11:29:11.462878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:16.100 [2024-11-19 11:29:11.464416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.100 [2024-11-19 11:29:11.464478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.100 [2024-11-19 11:29:11.464546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:16.100 [2024-11-19 11:29:11.464550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.100 [2024-11-19 11:29:11.548459] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:16.100 [2024-11-19 11:29:11.548734] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:16.100 [2024-11-19 11:29:11.548986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:16.100 [2024-11-19 11:29:11.549665] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:16.101 [2024-11-19 11:29:11.549896] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:27:16.101 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.101 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:16.101 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:16.101 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.101 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.359 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.359 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:16.359 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.359 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.359 [2024-11-19 11:29:11.605252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.359 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.359 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:16.359 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.359 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.360 11:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.360 Malloc0 00:27:16.360 [2024-11-19 11:29:11.681559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2744770 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2744770 /var/tmp/bdevperf.sock 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2744770 ']' 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:16.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:16.360 { 00:27:16.360 "params": { 00:27:16.360 "name": "Nvme$subsystem", 00:27:16.360 "trtype": "$TEST_TRANSPORT", 00:27:16.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.360 "adrfam": "ipv4", 00:27:16.360 "trsvcid": "$NVMF_PORT", 00:27:16.360 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.360 "hdgst": ${hdgst:-false}, 00:27:16.360 "ddgst": ${ddgst:-false} 00:27:16.360 }, 00:27:16.360 "method": "bdev_nvme_attach_controller" 00:27:16.360 } 00:27:16.360 EOF 00:27:16.360 )") 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:16.360 11:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:16.360 "params": { 00:27:16.360 "name": "Nvme0", 00:27:16.360 "trtype": "tcp", 00:27:16.360 "traddr": "10.0.0.2", 00:27:16.360 "adrfam": "ipv4", 00:27:16.360 "trsvcid": "4420", 00:27:16.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:16.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:16.360 "hdgst": false, 00:27:16.360 "ddgst": false 00:27:16.360 }, 00:27:16.360 "method": "bdev_nvme_attach_controller" 00:27:16.360 }' 00:27:16.360 [2024-11-19 11:29:11.756836] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:27:16.360 [2024-11-19 11:29:11.756912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744770 ] 00:27:16.360 [2024-11-19 11:29:11.836848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.619 [2024-11-19 11:29:11.896439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.619 Running I/O for 10 seconds... 
00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:16.877 11:29:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:27:16.877 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:27:16.878 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.138 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:17.138 [2024-11-19 11:29:12.493283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.138 [2024-11-19 11:29:12.493535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.139 [2024-11-19 11:29:12.493547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is 
same with the state(6) to be set 00:27:17.139 [2024-11-19 11:29:12.493569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.139 [2024-11-19 11:29:12.493582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.139 [2024-11-19 11:29:12.493593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.139 [2024-11-19 11:29:12.493605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f520 is same with the state(6) to be set 00:27:17.139 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.139 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:17.139 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.139 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:17.139 [2024-11-19 11:29:12.501915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.139 [2024-11-19 11:29:12.501955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.501989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.139 [2024-11-19 11:29:12.502004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:17.139 [2024-11-19 11:29:12.502018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.139 [2024-11-19 11:29:12.502040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.139 [2024-11-19 11:29:12.502067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x761a40 is same with the state(6) to be set 00:27:17.139 [2024-11-19 11:29:12.502415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.502985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.502999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.503014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.503028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 
11:29:12.503044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.503057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.503073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.503086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.503102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.503116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.503131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.503145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.503160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.503174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.503189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.503204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.503219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.503233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.503249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.139 [2024-11-19 11:29:12.503263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.139 [2024-11-19 11:29:12.503277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 
11:29:12.503742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.503978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.503994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.504023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.504053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.504082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.504117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.504147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.504176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.504206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.504235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.504266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.504295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.504325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 [2024-11-19 11:29:12.504360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.140 [2024-11-19 11:29:12.504383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.140 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.140 [2024-11-19 11:29:12.505561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:17.140 11:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@87 -- # sleep 1 00:27:17.140 task offset: 73728 on job bdev=Nvme0n1 fails 00:27:17.140 00:27:17.140 Latency(us) 00:27:17.140 [2024-11-19T10:29:12.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.141 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:17.141 Job: Nvme0n1 ended in about 0.39 seconds with error 00:27:17.141 Verification LBA range: start 0x0 length 0x400 00:27:17.141 Nvme0n1 : 0.39 1479.65 92.48 164.41 0.00 37795.05 2876.30 34175.81 00:27:17.141 [2024-11-19T10:29:12.638Z] =================================================================================================================== 00:27:17.141 [2024-11-19T10:29:12.638Z] Total : 1479.65 92.48 164.41 0.00 37795.05 2876.30 34175.81 00:27:17.141 [2024-11-19 11:29:12.508241] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:17.141 [2024-11-19 11:29:12.508275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x761a40 (9): Bad file descriptor 00:27:17.141 [2024-11-19 11:29:12.552768] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2744770 00:27:18.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2744770) - No such process 00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:18.076 { 00:27:18.076 "params": { 00:27:18.076 "name": "Nvme$subsystem", 00:27:18.076 "trtype": "$TEST_TRANSPORT", 00:27:18.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.076 "adrfam": "ipv4", 00:27:18.076 "trsvcid": "$NVMF_PORT", 00:27:18.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.076 "hdgst": ${hdgst:-false}, 00:27:18.076 "ddgst": ${ddgst:-false} 
00:27:18.076 }, 00:27:18.076 "method": "bdev_nvme_attach_controller" 00:27:18.076 } 00:27:18.076 EOF 00:27:18.076 )") 00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:18.076 11:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:18.076 "params": { 00:27:18.076 "name": "Nvme0", 00:27:18.076 "trtype": "tcp", 00:27:18.076 "traddr": "10.0.0.2", 00:27:18.076 "adrfam": "ipv4", 00:27:18.076 "trsvcid": "4420", 00:27:18.076 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:18.076 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:18.076 "hdgst": false, 00:27:18.076 "ddgst": false 00:27:18.076 }, 00:27:18.076 "method": "bdev_nvme_attach_controller" 00:27:18.076 }' 00:27:18.076 [2024-11-19 11:29:13.555530] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:27:18.076 [2024-11-19 11:29:13.555613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744979 ] 00:27:18.335 [2024-11-19 11:29:13.635806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.335 [2024-11-19 11:29:13.693754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.593 Running I/O for 1 seconds... 
00:27:19.527 1565.00 IOPS, 97.81 MiB/s 00:27:19.527 Latency(us) 00:27:19.527 [2024-11-19T10:29:15.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.528 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:19.528 Verification LBA range: start 0x0 length 0x400 00:27:19.528 Nvme0n1 : 1.02 1601.40 100.09 0.00 0.00 39132.88 3034.07 33981.63 00:27:19.528 [2024-11-19T10:29:15.025Z] =================================================================================================================== 00:27:19.528 [2024-11-19T10:29:15.025Z] Total : 1601.40 100.09 0.00 0.00 39132.88 3034.07 33981.63 00:27:19.786 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:27:19.786 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:27:19.786 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:19.786 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:19.786 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:27:19.786 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:19.786 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:27:19.786 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:19.786 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:27:19.786 11:29:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:19.786 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:19.786 rmmod nvme_tcp 00:27:19.786 rmmod nvme_fabrics 00:27:19.786 rmmod nvme_keyring 00:27:19.786 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2744675 ']' 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2744675 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2744675 ']' 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2744675 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2744675 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:20.044 11:29:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2744675' 00:27:20.044 killing process with pid 2744675 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2744675 00:27:20.044 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2744675 00:27:20.303 [2024-11-19 11:29:15.549337] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:20.303 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:20.303 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:20.303 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:20.303 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:20.303 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:20.303 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:20.303 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:27:20.303 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:20.303 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:20.303 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.303 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.303 11:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.209 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:22.209 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:22.209 00:27:22.209 real 0m9.348s 00:27:22.209 user 0m17.630s 00:27:22.209 sys 0m4.156s 00:27:22.209 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:22.209 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:22.209 ************************************ 00:27:22.209 END TEST nvmf_host_management 00:27:22.209 ************************************ 00:27:22.209 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:22.209 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:22.209 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:22.209 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:22.209 ************************************ 00:27:22.209 START TEST nvmf_lvol 00:27:22.209 ************************************ 00:27:22.209 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:22.468 * Looking for test storage... 
00:27:22.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:22.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.468 --rc genhtml_branch_coverage=1 00:27:22.468 --rc genhtml_function_coverage=1 00:27:22.468 --rc genhtml_legend=1 00:27:22.468 --rc geninfo_all_blocks=1 00:27:22.468 --rc geninfo_unexecuted_blocks=1 00:27:22.468 00:27:22.468 ' 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:22.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.468 --rc genhtml_branch_coverage=1 00:27:22.468 --rc genhtml_function_coverage=1 00:27:22.468 --rc genhtml_legend=1 00:27:22.468 --rc geninfo_all_blocks=1 00:27:22.468 --rc geninfo_unexecuted_blocks=1 00:27:22.468 00:27:22.468 ' 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:22.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.468 --rc genhtml_branch_coverage=1 00:27:22.468 --rc genhtml_function_coverage=1 00:27:22.468 --rc genhtml_legend=1 00:27:22.468 --rc geninfo_all_blocks=1 00:27:22.468 --rc geninfo_unexecuted_blocks=1 00:27:22.468 00:27:22.468 ' 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:22.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.468 --rc genhtml_branch_coverage=1 00:27:22.468 --rc genhtml_function_coverage=1 00:27:22.468 --rc genhtml_legend=1 00:27:22.468 --rc geninfo_all_blocks=1 00:27:22.468 --rc geninfo_unexecuted_blocks=1 00:27:22.468 00:27:22.468 ' 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:22.468 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:22.469 
11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:27:22.469 11:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.003 11:29:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.003 11:29:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:27:25.003 Found 0000:82:00.0 (0x8086 - 0x159b) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:27:25.003 Found 0000:82:00.1 (0x8086 - 0x159b) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.003 11:29:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:27:25.003 Found net devices under 0000:82:00.0: cvl_0_0 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.003 11:29:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:27:25.003 Found net devices under 0000:82:00.1: cvl_0_1 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:25.003 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.004 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:25.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:27:25.262 00:27:25.262 --- 10.0.0.2 ping statistics --- 00:27:25.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.262 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:27:25.262 00:27:25.262 --- 10.0.0.1 ping statistics --- 00:27:25.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.262 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.262 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2747533 
00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2747533 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2747533 ']' 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.263 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:25.263 [2024-11-19 11:29:20.699354] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:25.263 [2024-11-19 11:29:20.700515] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:27:25.263 [2024-11-19 11:29:20.700588] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.522 [2024-11-19 11:29:20.784584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:25.522 [2024-11-19 11:29:20.843284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.522 [2024-11-19 11:29:20.843346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.522 [2024-11-19 11:29:20.843383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.522 [2024-11-19 11:29:20.843403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.522 [2024-11-19 11:29:20.843413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.522 [2024-11-19 11:29:20.844974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.522 [2024-11-19 11:29:20.848384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.522 [2024-11-19 11:29:20.848396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.522 [2024-11-19 11:29:20.943086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:25.522 [2024-11-19 11:29:20.943320] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:25.522 [2024-11-19 11:29:20.943326] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:25.522 [2024-11-19 11:29:20.943602] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:25.522 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.522 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:27:25.522 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:25.522 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:25.522 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:25.522 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.522 11:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:25.780 [2024-11-19 11:29:21.257085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.039 11:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:26.298 11:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:27:26.298 11:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:26.556 11:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:27:26.556 11:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:27:26.814 11:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:27:27.072 11:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c9588ee6-e4d6-426a-beb9-cc20f1623666 00:27:27.072 11:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c9588ee6-e4d6-426a-beb9-cc20f1623666 lvol 20 00:27:27.331 11:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a295b0d5-dc7f-42d8-a059-9755d8dc7faa 00:27:27.331 11:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:27.589 11:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a295b0d5-dc7f-42d8-a059-9755d8dc7faa 00:27:27.848 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:28.107 [2024-11-19 11:29:23.501256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.107 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:28.376 
11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2747920 00:27:28.377 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:27:28.377 11:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:27:29.362 11:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a295b0d5-dc7f-42d8-a059-9755d8dc7faa MY_SNAPSHOT 00:27:29.929 11:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=cd2274cc-15bb-4f9f-b347-6b1ddbedac68 00:27:29.929 11:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a295b0d5-dc7f-42d8-a059-9755d8dc7faa 30 00:27:30.187 11:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone cd2274cc-15bb-4f9f-b347-6b1ddbedac68 MY_CLONE 00:27:30.445 11:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ea0f729a-7540-40d0-8daa-016bd810e0d1 00:27:30.445 11:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ea0f729a-7540-40d0-8daa-016bd810e0d1 00:27:31.381 11:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2747920 00:27:39.495 Initializing NVMe Controllers 00:27:39.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:39.495 
Controller IO queue size 128, less than required. 00:27:39.495 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:39.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:27:39.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:27:39.495 Initialization complete. Launching workers. 00:27:39.495 ======================================================== 00:27:39.495 Latency(us) 00:27:39.495 Device Information : IOPS MiB/s Average min max 00:27:39.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10467.00 40.89 12237.01 1739.10 108825.17 00:27:39.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10368.30 40.50 12352.37 2578.00 57624.09 00:27:39.495 ======================================================== 00:27:39.495 Total : 20835.30 81.39 12294.42 1739.10 108825.17 00:27:39.495 00:27:39.495 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:39.495 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a295b0d5-dc7f-42d8-a059-9755d8dc7faa 00:27:39.495 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c9588ee6-e4d6-426a-beb9-cc20f1623666 00:27:39.495 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:27:39.495 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:27:39.495 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 
-- # nvmftestfini 00:27:39.495 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:39.495 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:27:39.495 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:39.495 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:27:39.495 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:39.495 11:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:39.495 rmmod nvme_tcp 00:27:39.495 rmmod nvme_fabrics 00:27:39.752 rmmod nvme_keyring 00:27:39.752 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:39.752 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:27:39.752 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:27:39.752 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2747533 ']' 00:27:39.752 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2747533 00:27:39.752 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2747533 ']' 00:27:39.752 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2747533 00:27:39.752 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:27:39.752 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:39.753 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2747533 00:27:39.753 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:39.753 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:39.753 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2747533' 00:27:39.753 killing process with pid 2747533 00:27:39.753 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2747533 00:27:39.753 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2747533 00:27:40.013 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:40.013 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:40.013 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:40.013 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:27:40.013 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:27:40.013 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:40.013 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:27:40.013 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:40.013 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:40.013 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.013 11:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.013 11:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.922 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:41.922 00:27:41.922 real 0m19.715s 00:27:41.922 user 0m56.457s 00:27:41.922 sys 0m8.419s 00:27:41.922 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:41.922 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:41.922 ************************************ 00:27:41.922 END TEST nvmf_lvol 00:27:41.922 ************************************ 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:42.181 ************************************ 00:27:42.181 START TEST nvmf_lvs_grow 00:27:42.181 ************************************ 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:42.181 * Looking for test storage... 
00:27:42.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:42.181 11:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:42.181 11:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:42.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.181 --rc genhtml_branch_coverage=1 00:27:42.181 --rc genhtml_function_coverage=1 00:27:42.181 --rc genhtml_legend=1 00:27:42.181 --rc geninfo_all_blocks=1 00:27:42.181 --rc geninfo_unexecuted_blocks=1 00:27:42.181 00:27:42.181 ' 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:42.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.181 --rc genhtml_branch_coverage=1 00:27:42.181 --rc genhtml_function_coverage=1 00:27:42.181 --rc genhtml_legend=1 00:27:42.181 --rc geninfo_all_blocks=1 00:27:42.181 --rc geninfo_unexecuted_blocks=1 00:27:42.181 00:27:42.181 ' 00:27:42.181 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:42.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.181 --rc genhtml_branch_coverage=1 00:27:42.181 --rc genhtml_function_coverage=1 00:27:42.181 --rc genhtml_legend=1 00:27:42.181 --rc geninfo_all_blocks=1 00:27:42.181 --rc geninfo_unexecuted_blocks=1 00:27:42.182 00:27:42.182 ' 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:42.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.182 --rc genhtml_branch_coverage=1 00:27:42.182 --rc genhtml_function_coverage=1 00:27:42.182 --rc genhtml_legend=1 00:27:42.182 --rc geninfo_all_blocks=1 00:27:42.182 --rc 
geninfo_unexecuted_blocks=1 00:27:42.182 00:27:42.182 ' 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:27:42.182 11:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.182 11:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:42.182 11:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:27:42.182 11:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:45.466 
11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.466 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.466 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.466 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.466 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.466 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.466 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.466 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.466 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.466 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:27:45.466 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.466 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.467 11:29:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.467 11:29:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:27:45.467 Found 0000:82:00.0 (0x8086 - 0x159b) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:27:45.467 Found 0000:82:00.1 (0x8086 - 0x159b) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:27:45.467 Found net devices under 0000:82:00.0: cvl_0_0 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.467 11:29:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:27:45.467 Found net devices under 0000:82:00.1: cvl_0_1 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.467 
11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.467 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:27:45.468 00:27:45.468 --- 10.0.0.2 ping statistics --- 00:27:45.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.468 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:45.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:27:45.468 00:27:45.468 --- 10.0.0.1 ping statistics --- 00:27:45.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.468 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:45.468 11:29:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2751507 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2751507 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2751507 ']' 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:45.468 [2024-11-19 11:29:40.473870] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:45.468 [2024-11-19 11:29:40.474924] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:27:45.468 [2024-11-19 11:29:40.474987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.468 [2024-11-19 11:29:40.551920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.468 [2024-11-19 11:29:40.604307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.468 [2024-11-19 11:29:40.604373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.468 [2024-11-19 11:29:40.604402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.468 [2024-11-19 11:29:40.604413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.468 [2024-11-19 11:29:40.604422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.468 [2024-11-19 11:29:40.604981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.468 [2024-11-19 11:29:40.685995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:45.468 [2024-11-19 11:29:40.686310] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.468 11:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:45.727 [2024-11-19 11:29:40.993574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:45.727 ************************************ 00:27:45.727 START TEST lvs_grow_clean 00:27:45.727 ************************************ 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:27:45.727 11:29:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:45.727 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:45.986 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:45.986 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:46.244 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=01d105da-cf5b-4a33-823c-7a99ed0950bd 00:27:46.244 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01d105da-cf5b-4a33-823c-7a99ed0950bd 00:27:46.244 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:46.503 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:46.503 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:46.503 11:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 01d105da-cf5b-4a33-823c-7a99ed0950bd lvol 150 00:27:46.761 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=076265ce-3ffc-4a6a-be24-42913ee80509 00:27:46.761 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:46.761 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:47.020 [2024-11-19 11:29:42.421477] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:47.020 [2024-11-19 11:29:42.421570] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:47.020 true 00:27:47.020 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01d105da-cf5b-4a33-823c-7a99ed0950bd 00:27:47.020 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:47.278 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:47.278 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:47.536 11:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 076265ce-3ffc-4a6a-be24-42913ee80509 00:27:47.795 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:48.053 [2024-11-19 11:29:43.521786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.053 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:48.312 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2751941 00:27:48.312 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:48.312 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:48.312 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2751941 /var/tmp/bdevperf.sock 00:27:48.312 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2751941 ']' 00:27:48.312 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:48.312 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.312 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:48.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:48.312 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.312 11:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:48.571 [2024-11-19 11:29:43.849987] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:27:48.571 [2024-11-19 11:29:43.850075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2751941 ] 00:27:48.571 [2024-11-19 11:29:43.926149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.571 [2024-11-19 11:29:43.984805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.829 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:48.829 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:27:48.829 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:49.088 Nvme0n1 00:27:49.088 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:49.346 [ 00:27:49.346 { 00:27:49.346 "name": "Nvme0n1", 00:27:49.346 "aliases": [ 00:27:49.346 "076265ce-3ffc-4a6a-be24-42913ee80509" 00:27:49.346 ], 00:27:49.346 "product_name": "NVMe disk", 00:27:49.346 
"block_size": 4096, 00:27:49.346 "num_blocks": 38912, 00:27:49.346 "uuid": "076265ce-3ffc-4a6a-be24-42913ee80509", 00:27:49.346 "numa_id": 1, 00:27:49.346 "assigned_rate_limits": { 00:27:49.346 "rw_ios_per_sec": 0, 00:27:49.346 "rw_mbytes_per_sec": 0, 00:27:49.346 "r_mbytes_per_sec": 0, 00:27:49.346 "w_mbytes_per_sec": 0 00:27:49.346 }, 00:27:49.346 "claimed": false, 00:27:49.346 "zoned": false, 00:27:49.346 "supported_io_types": { 00:27:49.346 "read": true, 00:27:49.346 "write": true, 00:27:49.346 "unmap": true, 00:27:49.346 "flush": true, 00:27:49.346 "reset": true, 00:27:49.346 "nvme_admin": true, 00:27:49.346 "nvme_io": true, 00:27:49.346 "nvme_io_md": false, 00:27:49.346 "write_zeroes": true, 00:27:49.346 "zcopy": false, 00:27:49.346 "get_zone_info": false, 00:27:49.346 "zone_management": false, 00:27:49.346 "zone_append": false, 00:27:49.346 "compare": true, 00:27:49.346 "compare_and_write": true, 00:27:49.346 "abort": true, 00:27:49.346 "seek_hole": false, 00:27:49.346 "seek_data": false, 00:27:49.346 "copy": true, 00:27:49.346 "nvme_iov_md": false 00:27:49.346 }, 00:27:49.346 "memory_domains": [ 00:27:49.346 { 00:27:49.346 "dma_device_id": "system", 00:27:49.346 "dma_device_type": 1 00:27:49.346 } 00:27:49.346 ], 00:27:49.346 "driver_specific": { 00:27:49.346 "nvme": [ 00:27:49.346 { 00:27:49.346 "trid": { 00:27:49.346 "trtype": "TCP", 00:27:49.346 "adrfam": "IPv4", 00:27:49.346 "traddr": "10.0.0.2", 00:27:49.346 "trsvcid": "4420", 00:27:49.346 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:49.346 }, 00:27:49.346 "ctrlr_data": { 00:27:49.346 "cntlid": 1, 00:27:49.346 "vendor_id": "0x8086", 00:27:49.346 "model_number": "SPDK bdev Controller", 00:27:49.346 "serial_number": "SPDK0", 00:27:49.346 "firmware_revision": "25.01", 00:27:49.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.346 "oacs": { 00:27:49.346 "security": 0, 00:27:49.346 "format": 0, 00:27:49.346 "firmware": 0, 00:27:49.346 "ns_manage": 0 00:27:49.346 }, 00:27:49.346 "multi_ctrlr": true, 
00:27:49.346 "ana_reporting": false 00:27:49.346 }, 00:27:49.346 "vs": { 00:27:49.346 "nvme_version": "1.3" 00:27:49.346 }, 00:27:49.346 "ns_data": { 00:27:49.346 "id": 1, 00:27:49.346 "can_share": true 00:27:49.346 } 00:27:49.346 } 00:27:49.346 ], 00:27:49.346 "mp_policy": "active_passive" 00:27:49.346 } 00:27:49.346 } 00:27:49.346 ] 00:27:49.346 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2752070 00:27:49.346 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:49.346 11:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:49.605 Running I/O for 10 seconds... 00:27:50.539 Latency(us) 00:27:50.539 [2024-11-19T10:29:46.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:50.539 Nvme0n1 : 1.00 16256.00 63.50 0.00 0.00 0.00 0.00 0.00 00:27:50.540 [2024-11-19T10:29:46.037Z] =================================================================================================================== 00:27:50.540 [2024-11-19T10:29:46.037Z] Total : 16256.00 63.50 0.00 0.00 0.00 0.00 0.00 00:27:50.540 00:27:51.472 11:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 01d105da-cf5b-4a33-823c-7a99ed0950bd 00:27:51.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:51.473 Nvme0n1 : 2.00 16383.00 64.00 0.00 0.00 0.00 0.00 0.00 00:27:51.473 [2024-11-19T10:29:46.970Z] 
=================================================================================================================== 00:27:51.473 [2024-11-19T10:29:46.970Z] Total : 16383.00 64.00 0.00 0.00 0.00 0.00 0.00 00:27:51.473 00:27:51.731 true 00:27:51.731 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01d105da-cf5b-4a33-823c-7a99ed0950bd 00:27:51.731 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:51.989 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:51.989 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:51.989 11:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2752070 00:27:52.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:52.555 Nvme0n1 : 3.00 16340.67 63.83 0.00 0.00 0.00 0.00 0.00 00:27:52.555 [2024-11-19T10:29:48.052Z] =================================================================================================================== 00:27:52.555 [2024-11-19T10:29:48.052Z] Total : 16340.67 63.83 0.00 0.00 0.00 0.00 0.00 00:27:52.555 00:27:53.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:53.489 Nvme0n1 : 4.00 16414.75 64.12 0.00 0.00 0.00 0.00 0.00 00:27:53.489 [2024-11-19T10:29:48.986Z] =================================================================================================================== 00:27:53.489 [2024-11-19T10:29:48.986Z] Total : 16414.75 64.12 0.00 0.00 0.00 0.00 0.00 00:27:53.489 00:27:54.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:27:54.421 Nvme0n1 : 5.00 16510.00 64.49 0.00 0.00 0.00 0.00 0.00 00:27:54.421 [2024-11-19T10:29:49.918Z] =================================================================================================================== 00:27:54.421 [2024-11-19T10:29:49.919Z] Total : 16510.00 64.49 0.00 0.00 0.00 0.00 0.00 00:27:54.422 00:27:55.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:55.802 Nvme0n1 : 6.00 16573.50 64.74 0.00 0.00 0.00 0.00 0.00 00:27:55.802 [2024-11-19T10:29:51.299Z] =================================================================================================================== 00:27:55.802 [2024-11-19T10:29:51.299Z] Total : 16573.50 64.74 0.00 0.00 0.00 0.00 0.00 00:27:55.802 00:27:56.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:56.736 Nvme0n1 : 7.00 16600.71 64.85 0.00 0.00 0.00 0.00 0.00 00:27:56.736 [2024-11-19T10:29:52.233Z] =================================================================================================================== 00:27:56.736 [2024-11-19T10:29:52.233Z] Total : 16600.71 64.85 0.00 0.00 0.00 0.00 0.00 00:27:56.736 00:27:57.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:57.669 Nvme0n1 : 8.00 16621.12 64.93 0.00 0.00 0.00 0.00 0.00 00:27:57.669 [2024-11-19T10:29:53.166Z] =================================================================================================================== 00:27:57.669 [2024-11-19T10:29:53.166Z] Total : 16621.12 64.93 0.00 0.00 0.00 0.00 0.00 00:27:57.669 00:27:58.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:58.602 Nvme0n1 : 9.00 16679.33 65.15 0.00 0.00 0.00 0.00 0.00 00:27:58.602 [2024-11-19T10:29:54.099Z] =================================================================================================================== 00:27:58.602 [2024-11-19T10:29:54.099Z] Total : 16679.33 65.15 0.00 0.00 0.00 0.00 0.00 00:27:58.602 
00:27:59.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:59.536 Nvme0n1 : 10.00 16713.20 65.29 0.00 0.00 0.00 0.00 0.00 00:27:59.536 [2024-11-19T10:29:55.033Z] =================================================================================================================== 00:27:59.536 [2024-11-19T10:29:55.033Z] Total : 16713.20 65.29 0.00 0.00 0.00 0.00 0.00 00:27:59.536 00:27:59.536 00:27:59.536 Latency(us) 00:27:59.536 [2024-11-19T10:29:55.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:59.536 Nvme0n1 : 10.01 16713.27 65.29 0.00 0.00 7654.61 6747.78 17864.63 00:27:59.536 [2024-11-19T10:29:55.033Z] =================================================================================================================== 00:27:59.536 [2024-11-19T10:29:55.033Z] Total : 16713.27 65.29 0.00 0.00 7654.61 6747.78 17864.63 00:27:59.536 { 00:27:59.536 "results": [ 00:27:59.536 { 00:27:59.536 "job": "Nvme0n1", 00:27:59.536 "core_mask": "0x2", 00:27:59.536 "workload": "randwrite", 00:27:59.536 "status": "finished", 00:27:59.536 "queue_depth": 128, 00:27:59.536 "io_size": 4096, 00:27:59.536 "runtime": 10.007617, 00:27:59.536 "iops": 16713.269502619856, 00:27:59.536 "mibps": 65.28620899460881, 00:27:59.536 "io_failed": 0, 00:27:59.536 "io_timeout": 0, 00:27:59.536 "avg_latency_us": 7654.608340901945, 00:27:59.536 "min_latency_us": 6747.780740740741, 00:27:59.536 "max_latency_us": 17864.62814814815 00:27:59.536 } 00:27:59.536 ], 00:27:59.536 "core_count": 1 00:27:59.536 } 00:27:59.536 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2751941 00:27:59.536 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2751941 ']' 00:27:59.536 11:29:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2751941 00:27:59.536 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:27:59.536 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:59.536 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2751941 00:27:59.536 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:59.536 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:59.536 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2751941' 00:27:59.536 killing process with pid 2751941 00:27:59.536 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2751941 00:27:59.536 Received shutdown signal, test time was about 10.000000 seconds 00:27:59.536 00:27:59.536 Latency(us) 00:27:59.537 [2024-11-19T10:29:55.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.537 [2024-11-19T10:29:55.034Z] =================================================================================================================== 00:27:59.537 [2024-11-19T10:29:55.034Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:59.537 11:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2751941 00:27:59.795 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:00.053 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:00.311 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01d105da-cf5b-4a33-823c-7a99ed0950bd 00:28:00.311 11:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:00.570 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:00.570 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:00.570 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:00.829 [2024-11-19 11:29:56.285534] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:00.829 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01d105da-cf5b-4a33-823c-7a99ed0950bd 00:28:00.829 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:28:00.829 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01d105da-cf5b-4a33-823c-7a99ed0950bd 00:28:00.829 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:00.829 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.829 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:00.829 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.829 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:00.829 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.829 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:00.829 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:00.829 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01d105da-cf5b-4a33-823c-7a99ed0950bd 00:28:01.127 request: 00:28:01.127 { 00:28:01.127 "uuid": "01d105da-cf5b-4a33-823c-7a99ed0950bd", 00:28:01.127 "method": 
"bdev_lvol_get_lvstores", 00:28:01.127 "req_id": 1 00:28:01.127 } 00:28:01.127 Got JSON-RPC error response 00:28:01.127 response: 00:28:01.127 { 00:28:01.127 "code": -19, 00:28:01.127 "message": "No such device" 00:28:01.128 } 00:28:01.128 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:28:01.128 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:01.128 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:01.128 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:01.128 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:01.725 aio_bdev 00:28:01.725 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 076265ce-3ffc-4a6a-be24-42913ee80509 00:28:01.725 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=076265ce-3ffc-4a6a-be24-42913ee80509 00:28:01.725 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:01.725 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:28:01.725 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:01.725 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:01.725 11:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:01.725 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 076265ce-3ffc-4a6a-be24-42913ee80509 -t 2000 00:28:01.983 [ 00:28:01.983 { 00:28:01.983 "name": "076265ce-3ffc-4a6a-be24-42913ee80509", 00:28:01.983 "aliases": [ 00:28:01.983 "lvs/lvol" 00:28:01.983 ], 00:28:01.983 "product_name": "Logical Volume", 00:28:01.983 "block_size": 4096, 00:28:01.983 "num_blocks": 38912, 00:28:01.983 "uuid": "076265ce-3ffc-4a6a-be24-42913ee80509", 00:28:01.983 "assigned_rate_limits": { 00:28:01.983 "rw_ios_per_sec": 0, 00:28:01.983 "rw_mbytes_per_sec": 0, 00:28:01.983 "r_mbytes_per_sec": 0, 00:28:01.983 "w_mbytes_per_sec": 0 00:28:01.983 }, 00:28:01.983 "claimed": false, 00:28:01.983 "zoned": false, 00:28:01.983 "supported_io_types": { 00:28:01.983 "read": true, 00:28:01.983 "write": true, 00:28:01.983 "unmap": true, 00:28:01.983 "flush": false, 00:28:01.983 "reset": true, 00:28:01.983 "nvme_admin": false, 00:28:01.983 "nvme_io": false, 00:28:01.983 "nvme_io_md": false, 00:28:01.983 "write_zeroes": true, 00:28:01.983 "zcopy": false, 00:28:01.983 "get_zone_info": false, 00:28:01.983 "zone_management": false, 00:28:01.983 "zone_append": false, 00:28:01.983 "compare": false, 00:28:01.983 "compare_and_write": false, 00:28:01.983 "abort": false, 00:28:01.983 "seek_hole": true, 00:28:01.983 "seek_data": true, 00:28:01.983 "copy": false, 00:28:01.983 "nvme_iov_md": false 00:28:01.983 }, 00:28:01.983 "driver_specific": { 00:28:01.983 "lvol": { 00:28:01.983 "lvol_store_uuid": "01d105da-cf5b-4a33-823c-7a99ed0950bd", 00:28:01.983 "base_bdev": "aio_bdev", 00:28:01.983 
"thin_provision": false, 00:28:01.983 "num_allocated_clusters": 38, 00:28:01.983 "snapshot": false, 00:28:01.983 "clone": false, 00:28:01.983 "esnap_clone": false 00:28:01.983 } 00:28:01.983 } 00:28:01.983 } 00:28:01.983 ] 00:28:01.983 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:28:01.983 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01d105da-cf5b-4a33-823c-7a99ed0950bd 00:28:01.983 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:02.241 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:02.499 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01d105da-cf5b-4a33-823c-7a99ed0950bd 00:28:02.499 11:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:02.757 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:02.757 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 076265ce-3ffc-4a6a-be24-42913ee80509 00:28:03.015 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 01d105da-cf5b-4a33-823c-7a99ed0950bd 
00:28:03.273 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:03.531 00:28:03.531 real 0m17.816s 00:28:03.531 user 0m17.466s 00:28:03.531 sys 0m1.846s 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.531 ************************************ 00:28:03.531 END TEST lvs_grow_clean 00:28:03.531 ************************************ 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:03.531 ************************************ 00:28:03.531 START TEST lvs_grow_dirty 00:28:03.531 ************************************ 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:03.531 11:29:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:03.531 11:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:03.790 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:03.790 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:04.047 11:29:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:04.047 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:04.047 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:04.305 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:04.305 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:04.305 11:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e3eb30b1-7202-48d5-9a2a-80546af395c8 lvol 150 00:28:04.563 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=30963984-5c15-4d64-99af-3c190d01d40f 00:28:04.563 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:04.563 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:04.821 [2024-11-19 11:30:00.301511] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:04.821 [2024-11-19 
11:30:00.301620] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:04.821 true 00:28:05.079 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:05.079 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:05.337 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:05.337 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:05.595 11:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 30963984-5c15-4d64-99af-3c190d01d40f 00:28:05.853 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:06.112 [2024-11-19 11:30:01.441775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.112 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:06.370 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2754212 00:28:06.370 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:06.370 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:06.370 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2754212 /var/tmp/bdevperf.sock 00:28:06.370 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2754212 ']' 00:28:06.370 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:06.370 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:06.370 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:06.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:06.370 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:06.370 11:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:06.370 [2024-11-19 11:30:01.782839] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:28:06.370 [2024-11-19 11:30:01.782919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2754212 ] 00:28:06.370 [2024-11-19 11:30:01.858412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.628 [2024-11-19 11:30:01.918652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.628 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.628 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:06.628 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:07.194 Nvme0n1 00:28:07.194 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:07.452 [ 00:28:07.452 { 00:28:07.452 "name": "Nvme0n1", 00:28:07.452 "aliases": [ 00:28:07.452 "30963984-5c15-4d64-99af-3c190d01d40f" 00:28:07.452 ], 00:28:07.452 "product_name": "NVMe disk", 00:28:07.452 "block_size": 4096, 00:28:07.452 "num_blocks": 38912, 00:28:07.452 "uuid": "30963984-5c15-4d64-99af-3c190d01d40f", 00:28:07.452 "numa_id": 1, 00:28:07.452 "assigned_rate_limits": { 00:28:07.452 "rw_ios_per_sec": 0, 00:28:07.452 "rw_mbytes_per_sec": 0, 00:28:07.452 "r_mbytes_per_sec": 0, 00:28:07.452 "w_mbytes_per_sec": 0 00:28:07.452 }, 00:28:07.452 "claimed": false, 00:28:07.452 "zoned": false, 
00:28:07.452 "supported_io_types": { 00:28:07.452 "read": true, 00:28:07.452 "write": true, 00:28:07.452 "unmap": true, 00:28:07.452 "flush": true, 00:28:07.452 "reset": true, 00:28:07.453 "nvme_admin": true, 00:28:07.453 "nvme_io": true, 00:28:07.453 "nvme_io_md": false, 00:28:07.453 "write_zeroes": true, 00:28:07.453 "zcopy": false, 00:28:07.453 "get_zone_info": false, 00:28:07.453 "zone_management": false, 00:28:07.453 "zone_append": false, 00:28:07.453 "compare": true, 00:28:07.453 "compare_and_write": true, 00:28:07.453 "abort": true, 00:28:07.453 "seek_hole": false, 00:28:07.453 "seek_data": false, 00:28:07.453 "copy": true, 00:28:07.453 "nvme_iov_md": false 00:28:07.453 }, 00:28:07.453 "memory_domains": [ 00:28:07.453 { 00:28:07.453 "dma_device_id": "system", 00:28:07.453 "dma_device_type": 1 00:28:07.453 } 00:28:07.453 ], 00:28:07.453 "driver_specific": { 00:28:07.453 "nvme": [ 00:28:07.453 { 00:28:07.453 "trid": { 00:28:07.453 "trtype": "TCP", 00:28:07.453 "adrfam": "IPv4", 00:28:07.453 "traddr": "10.0.0.2", 00:28:07.453 "trsvcid": "4420", 00:28:07.453 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:07.453 }, 00:28:07.453 "ctrlr_data": { 00:28:07.453 "cntlid": 1, 00:28:07.453 "vendor_id": "0x8086", 00:28:07.453 "model_number": "SPDK bdev Controller", 00:28:07.453 "serial_number": "SPDK0", 00:28:07.453 "firmware_revision": "25.01", 00:28:07.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:07.453 "oacs": { 00:28:07.453 "security": 0, 00:28:07.453 "format": 0, 00:28:07.453 "firmware": 0, 00:28:07.453 "ns_manage": 0 00:28:07.453 }, 00:28:07.453 "multi_ctrlr": true, 00:28:07.453 "ana_reporting": false 00:28:07.453 }, 00:28:07.453 "vs": { 00:28:07.453 "nvme_version": "1.3" 00:28:07.453 }, 00:28:07.453 "ns_data": { 00:28:07.453 "id": 1, 00:28:07.453 "can_share": true 00:28:07.453 } 00:28:07.453 } 00:28:07.453 ], 00:28:07.453 "mp_policy": "active_passive" 00:28:07.453 } 00:28:07.453 } 00:28:07.453 ] 00:28:07.453 11:30:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2754363 00:28:07.453 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:07.453 11:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:07.453 Running I/O for 10 seconds... 00:28:08.829 Latency(us) 00:28:08.829 [2024-11-19T10:30:04.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:08.829 Nvme0n1 : 1.00 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:28:08.829 [2024-11-19T10:30:04.326Z] =================================================================================================================== 00:28:08.829 [2024-11-19T10:30:04.326Z] Total : 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:28:08.829 00:28:09.396 11:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:09.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:09.654 Nvme0n1 : 2.00 14160.50 55.31 0.00 0.00 0.00 0.00 0.00 00:28:09.654 [2024-11-19T10:30:05.151Z] =================================================================================================================== 00:28:09.654 [2024-11-19T10:30:05.151Z] Total : 14160.50 55.31 0.00 0.00 0.00 0.00 0.00 00:28:09.654 00:28:09.654 true 00:28:09.912 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:09.912 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:10.169 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:10.169 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:10.169 11:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2754363 00:28:10.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:10.736 Nvme0n1 : 3.00 14478.00 56.55 0.00 0.00 0.00 0.00 0.00 00:28:10.736 [2024-11-19T10:30:06.233Z] =================================================================================================================== 00:28:10.736 [2024-11-19T10:30:06.233Z] Total : 14478.00 56.55 0.00 0.00 0.00 0.00 0.00 00:28:10.736 00:28:11.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:11.671 Nvme0n1 : 4.00 15081.25 58.91 0.00 0.00 0.00 0.00 0.00 00:28:11.671 [2024-11-19T10:30:07.168Z] =================================================================================================================== 00:28:11.671 [2024-11-19T10:30:07.168Z] Total : 15081.25 58.91 0.00 0.00 0.00 0.00 0.00 00:28:11.671 00:28:12.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:12.607 Nvme0n1 : 5.00 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:28:12.607 [2024-11-19T10:30:08.104Z] =================================================================================================================== 00:28:12.607 [2024-11-19T10:30:08.104Z] Total : 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:28:12.607 00:28:13.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:28:13.543 Nvme0n1 : 6.00 15557.50 60.77 0.00 0.00 0.00 0.00 0.00 00:28:13.543 [2024-11-19T10:30:09.040Z] =================================================================================================================== 00:28:13.543 [2024-11-19T10:30:09.040Z] Total : 15557.50 60.77 0.00 0.00 0.00 0.00 0.00 00:28:13.543 00:28:14.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:14.477 Nvme0n1 : 7.00 15711.71 61.37 0.00 0.00 0.00 0.00 0.00 00:28:14.477 [2024-11-19T10:30:09.974Z] =================================================================================================================== 00:28:14.477 [2024-11-19T10:30:09.974Z] Total : 15711.71 61.37 0.00 0.00 0.00 0.00 0.00 00:28:14.477 00:28:15.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:15.853 Nvme0n1 : 8.00 15811.50 61.76 0.00 0.00 0.00 0.00 0.00 00:28:15.853 [2024-11-19T10:30:11.350Z] =================================================================================================================== 00:28:15.853 [2024-11-19T10:30:11.350Z] Total : 15811.50 61.76 0.00 0.00 0.00 0.00 0.00 00:28:15.853 00:28:16.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:16.787 Nvme0n1 : 9.00 15933.33 62.24 0.00 0.00 0.00 0.00 0.00 00:28:16.787 [2024-11-19T10:30:12.284Z] =================================================================================================================== 00:28:16.787 [2024-11-19T10:30:12.284Z] Total : 15933.33 62.24 0.00 0.00 0.00 0.00 0.00 00:28:16.787 00:28:17.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:17.723 Nvme0n1 : 10.00 16048.20 62.69 0.00 0.00 0.00 0.00 0.00 00:28:17.723 [2024-11-19T10:30:13.220Z] =================================================================================================================== 00:28:17.723 [2024-11-19T10:30:13.220Z] Total : 16048.20 62.69 0.00 0.00 0.00 0.00 0.00 00:28:17.723 00:28:17.723 
00:28:17.723 Latency(us) 00:28:17.723 [2024-11-19T10:30:13.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:17.723 Nvme0n1 : 10.01 16053.52 62.71 0.00 0.00 7969.23 3907.89 19612.25 00:28:17.723 [2024-11-19T10:30:13.221Z] =================================================================================================================== 00:28:17.724 [2024-11-19T10:30:13.221Z] Total : 16053.52 62.71 0.00 0.00 7969.23 3907.89 19612.25 00:28:17.724 { 00:28:17.724 "results": [ 00:28:17.724 { 00:28:17.724 "job": "Nvme0n1", 00:28:17.724 "core_mask": "0x2", 00:28:17.724 "workload": "randwrite", 00:28:17.724 "status": "finished", 00:28:17.724 "queue_depth": 128, 00:28:17.724 "io_size": 4096, 00:28:17.724 "runtime": 10.008583, 00:28:17.724 "iops": 16053.521262700224, 00:28:17.724 "mibps": 62.70906743242275, 00:28:17.724 "io_failed": 0, 00:28:17.724 "io_timeout": 0, 00:28:17.724 "avg_latency_us": 7969.229588635395, 00:28:17.724 "min_latency_us": 3907.8874074074074, 00:28:17.724 "max_latency_us": 19612.254814814816 00:28:17.724 } 00:28:17.724 ], 00:28:17.724 "core_count": 1 00:28:17.724 } 00:28:17.724 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2754212 00:28:17.724 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2754212 ']' 00:28:17.724 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2754212 00:28:17.724 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:28:17.724 11:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.724 11:30:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2754212 00:28:17.724 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:17.724 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:17.724 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2754212' 00:28:17.724 killing process with pid 2754212 00:28:17.724 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2754212 00:28:17.724 Received shutdown signal, test time was about 10.000000 seconds 00:28:17.724 00:28:17.724 Latency(us) 00:28:17.724 [2024-11-19T10:30:13.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.724 [2024-11-19T10:30:13.221Z] =================================================================================================================== 00:28:17.724 [2024-11-19T10:30:13.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:17.724 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2754212 00:28:17.724 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:17.983 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:18.551 11:30:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:18.551 11:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:18.551 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:18.551 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:18.551 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2751507 00:28:18.551 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2751507 00:28:18.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2751507 Killed "${NVMF_APP[@]}" "$@" 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2756183 00:28:18.810 11:30:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2756183 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2756183 ']' 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:18.810 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:18.810 [2024-11-19 11:30:14.118803] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:18.810 [2024-11-19 11:30:14.119941] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:28:18.810 [2024-11-19 11:30:14.120015] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.810 [2024-11-19 11:30:14.203973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.810 [2024-11-19 11:30:14.260962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.810 [2024-11-19 11:30:14.261026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.810 [2024-11-19 11:30:14.261055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.810 [2024-11-19 11:30:14.261066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.810 [2024-11-19 11:30:14.261075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.810 [2024-11-19 11:30:14.261755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.069 [2024-11-19 11:30:14.355889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:19.069 [2024-11-19 11:30:14.356197] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:19.069 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.069 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:19.069 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:19.069 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.069 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:19.069 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.069 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:19.327 [2024-11-19 11:30:14.660400] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:19.327 [2024-11-19 11:30:14.660542] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:19.327 [2024-11-19 11:30:14.660592] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:19.327 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:19.327 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 30963984-5c15-4d64-99af-3c190d01d40f 00:28:19.327 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=30963984-5c15-4d64-99af-3c190d01d40f 00:28:19.327 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:19.327 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:19.327 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:19.327 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:19.327 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:19.585 11:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 30963984-5c15-4d64-99af-3c190d01d40f -t 2000 00:28:19.843 [ 00:28:19.843 { 00:28:19.843 "name": "30963984-5c15-4d64-99af-3c190d01d40f", 00:28:19.843 "aliases": [ 00:28:19.843 "lvs/lvol" 00:28:19.843 ], 00:28:19.843 "product_name": "Logical Volume", 00:28:19.843 "block_size": 4096, 00:28:19.843 "num_blocks": 38912, 00:28:19.843 "uuid": "30963984-5c15-4d64-99af-3c190d01d40f", 00:28:19.843 "assigned_rate_limits": { 00:28:19.843 "rw_ios_per_sec": 0, 00:28:19.843 "rw_mbytes_per_sec": 0, 00:28:19.843 "r_mbytes_per_sec": 0, 00:28:19.843 "w_mbytes_per_sec": 0 00:28:19.843 }, 00:28:19.843 "claimed": false, 00:28:19.843 "zoned": false, 00:28:19.843 "supported_io_types": { 00:28:19.843 "read": true, 00:28:19.843 "write": true, 00:28:19.843 "unmap": true, 00:28:19.843 "flush": false, 00:28:19.843 "reset": true, 00:28:19.843 "nvme_admin": false, 00:28:19.843 "nvme_io": false, 00:28:19.843 "nvme_io_md": false, 00:28:19.843 "write_zeroes": true, 
00:28:19.843 "zcopy": false, 00:28:19.843 "get_zone_info": false, 00:28:19.843 "zone_management": false, 00:28:19.843 "zone_append": false, 00:28:19.843 "compare": false, 00:28:19.843 "compare_and_write": false, 00:28:19.843 "abort": false, 00:28:19.843 "seek_hole": true, 00:28:19.843 "seek_data": true, 00:28:19.843 "copy": false, 00:28:19.843 "nvme_iov_md": false 00:28:19.844 }, 00:28:19.844 "driver_specific": { 00:28:19.844 "lvol": { 00:28:19.844 "lvol_store_uuid": "e3eb30b1-7202-48d5-9a2a-80546af395c8", 00:28:19.844 "base_bdev": "aio_bdev", 00:28:19.844 "thin_provision": false, 00:28:19.844 "num_allocated_clusters": 38, 00:28:19.844 "snapshot": false, 00:28:19.844 "clone": false, 00:28:19.844 "esnap_clone": false 00:28:19.844 } 00:28:19.844 } 00:28:19.844 } 00:28:19.844 ] 00:28:19.844 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:19.844 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:19.844 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:20.102 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:20.102 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:20.102 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:20.359 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:20.359 11:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:20.618 [2024-11-19 11:30:16.030256] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:20.618 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:20.618 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:28:20.618 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:20.618 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:20.618 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:20.618 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:20.618 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:20.618 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:20.618 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:20.618 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:20.618 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:20.618 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:20.875 request: 00:28:20.875 { 00:28:20.875 "uuid": "e3eb30b1-7202-48d5-9a2a-80546af395c8", 00:28:20.875 "method": "bdev_lvol_get_lvstores", 00:28:20.875 "req_id": 1 00:28:20.875 } 00:28:20.875 Got JSON-RPC error response 00:28:20.875 response: 00:28:20.875 { 00:28:20.875 "code": -19, 00:28:20.875 "message": "No such device" 00:28:20.875 } 00:28:20.875 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:28:20.875 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:20.875 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:20.875 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:20.875 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:21.133 aio_bdev 00:28:21.392 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 30963984-5c15-4d64-99af-3c190d01d40f 00:28:21.392 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=30963984-5c15-4d64-99af-3c190d01d40f 00:28:21.392 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:21.392 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:21.392 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:21.392 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:21.392 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:21.651 11:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 30963984-5c15-4d64-99af-3c190d01d40f -t 2000 00:28:21.909 [ 00:28:21.909 { 00:28:21.909 "name": "30963984-5c15-4d64-99af-3c190d01d40f", 00:28:21.909 "aliases": [ 00:28:21.909 "lvs/lvol" 00:28:21.909 ], 00:28:21.909 "product_name": "Logical Volume", 00:28:21.909 "block_size": 4096, 00:28:21.909 "num_blocks": 38912, 00:28:21.909 "uuid": "30963984-5c15-4d64-99af-3c190d01d40f", 00:28:21.909 "assigned_rate_limits": { 00:28:21.909 "rw_ios_per_sec": 0, 00:28:21.909 "rw_mbytes_per_sec": 0, 00:28:21.909 
"r_mbytes_per_sec": 0, 00:28:21.909 "w_mbytes_per_sec": 0 00:28:21.909 }, 00:28:21.909 "claimed": false, 00:28:21.909 "zoned": false, 00:28:21.909 "supported_io_types": { 00:28:21.909 "read": true, 00:28:21.909 "write": true, 00:28:21.909 "unmap": true, 00:28:21.909 "flush": false, 00:28:21.909 "reset": true, 00:28:21.909 "nvme_admin": false, 00:28:21.909 "nvme_io": false, 00:28:21.909 "nvme_io_md": false, 00:28:21.909 "write_zeroes": true, 00:28:21.909 "zcopy": false, 00:28:21.909 "get_zone_info": false, 00:28:21.909 "zone_management": false, 00:28:21.909 "zone_append": false, 00:28:21.909 "compare": false, 00:28:21.909 "compare_and_write": false, 00:28:21.909 "abort": false, 00:28:21.909 "seek_hole": true, 00:28:21.909 "seek_data": true, 00:28:21.909 "copy": false, 00:28:21.909 "nvme_iov_md": false 00:28:21.909 }, 00:28:21.909 "driver_specific": { 00:28:21.909 "lvol": { 00:28:21.909 "lvol_store_uuid": "e3eb30b1-7202-48d5-9a2a-80546af395c8", 00:28:21.909 "base_bdev": "aio_bdev", 00:28:21.909 "thin_provision": false, 00:28:21.909 "num_allocated_clusters": 38, 00:28:21.909 "snapshot": false, 00:28:21.909 "clone": false, 00:28:21.909 "esnap_clone": false 00:28:21.909 } 00:28:21.909 } 00:28:21.909 } 00:28:21.909 ] 00:28:21.909 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:21.909 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:21.909 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:22.167 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:22.167 11:30:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:22.167 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:22.425 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:22.425 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 30963984-5c15-4d64-99af-3c190d01d40f 00:28:22.683 11:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e3eb30b1-7202-48d5-9a2a-80546af395c8 00:28:22.941 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:23.200 00:28:23.200 real 0m19.639s 00:28:23.200 user 0m36.260s 00:28:23.200 sys 0m5.137s 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:23.200 ************************************ 00:28:23.200 END TEST lvs_grow_dirty 00:28:23.200 ************************************ 
00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:23.200 nvmf_trace.0 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.200 11:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.200 rmmod nvme_tcp 00:28:23.200 rmmod nvme_fabrics 00:28:23.200 rmmod nvme_keyring 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2756183 ']' 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2756183 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2756183 ']' 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2756183 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.200 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2756183 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.458 
11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2756183' 00:28:23.458 killing process with pid 2756183 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2756183 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2756183 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.458 11:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.000 
11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:26.000 00:28:26.000 real 0m43.535s 00:28:26.000 user 0m55.693s 00:28:26.000 sys 0m9.415s 00:28:26.000 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.000 11:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:26.000 ************************************ 00:28:26.000 END TEST nvmf_lvs_grow 00:28:26.000 ************************************ 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:26.000 ************************************ 00:28:26.000 START TEST nvmf_bdev_io_wait 00:28:26.000 ************************************ 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:26.000 * Looking for test storage... 
00:28:26.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:26.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.000 --rc genhtml_branch_coverage=1 00:28:26.000 --rc genhtml_function_coverage=1 00:28:26.000 --rc genhtml_legend=1 00:28:26.000 --rc geninfo_all_blocks=1 00:28:26.000 --rc geninfo_unexecuted_blocks=1 00:28:26.000 00:28:26.000 ' 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:26.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.000 --rc genhtml_branch_coverage=1 00:28:26.000 --rc genhtml_function_coverage=1 00:28:26.000 --rc genhtml_legend=1 00:28:26.000 --rc geninfo_all_blocks=1 00:28:26.000 --rc geninfo_unexecuted_blocks=1 00:28:26.000 00:28:26.000 ' 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:26.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.000 --rc genhtml_branch_coverage=1 00:28:26.000 --rc genhtml_function_coverage=1 00:28:26.000 --rc genhtml_legend=1 00:28:26.000 --rc geninfo_all_blocks=1 00:28:26.000 --rc geninfo_unexecuted_blocks=1 00:28:26.000 00:28:26.000 ' 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:26.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.000 --rc genhtml_branch_coverage=1 00:28:26.000 --rc genhtml_function_coverage=1 
00:28:26.000 --rc genhtml_legend=1 00:28:26.000 --rc geninfo_all_blocks=1 00:28:26.000 --rc geninfo_unexecuted_blocks=1 00:28:26.000 00:28:26.000 ' 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:28:26.000 11:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.000 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.001 11:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.001 11:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:26.001 11:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.001 11:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:28:28.587 11:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:28:28.587 Found 0000:82:00.0 (0x8086 - 0x159b) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:28:28.587 Found 
0000:82:00.1 (0x8086 - 0x159b) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:28:28.587 Found net devices under 0000:82:00.0: cvl_0_0 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.587 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:28:28.587 Found net devices under 0000:82:00.1: cvl_0_1 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:28:28.588 11:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:28.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:28.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:28:28.588 00:28:28.588 --- 10.0.0.2 ping statistics --- 00:28:28.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.588 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:28:28.588 11:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:28.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:28:28.588 00:28:28.588 --- 10.0.0.1 ping statistics --- 00:28:28.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.588 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:28.588 11:30:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2759012 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2759012 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2759012 ']' 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.588 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:28.847 [2024-11-19 11:30:24.082437] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:28.847 [2024-11-19 11:30:24.083690] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:28:28.847 [2024-11-19 11:30:24.083753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.847 [2024-11-19 11:30:24.165737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:28.847 [2024-11-19 11:30:24.222681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.847 [2024-11-19 11:30:24.222736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.847 [2024-11-19 11:30:24.222764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.847 [2024-11-19 11:30:24.222775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.847 [2024-11-19 11:30:24.222785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:28.847 [2024-11-19 11:30:24.224339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.847 [2024-11-19 11:30:24.224475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.847 [2024-11-19 11:30:24.224501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.847 [2024-11-19 11:30:24.224505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.847 [2024-11-19 11:30:24.224997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:28.848 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.848 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:28:28.848 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:28.848 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:28.848 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.107 11:30:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:29.107 [2024-11-19 11:30:24.417195] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:29.107 [2024-11-19 11:30:24.417465] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:29.107 [2024-11-19 11:30:24.418277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:29.107 [2024-11-19 11:30:24.419087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:29.107 [2024-11-19 11:30:24.425196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:29.107 Malloc0 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.107 11:30:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:29.107 [2024-11-19 11:30:24.485397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2759155 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2759156 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:28:29.107 11:30:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2759159 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:29.107 { 00:28:29.107 "params": { 00:28:29.107 "name": "Nvme$subsystem", 00:28:29.107 "trtype": "$TEST_TRANSPORT", 00:28:29.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.107 "adrfam": "ipv4", 00:28:29.107 "trsvcid": "$NVMF_PORT", 00:28:29.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.107 "hdgst": ${hdgst:-false}, 00:28:29.107 "ddgst": ${ddgst:-false} 00:28:29.107 }, 00:28:29.107 "method": "bdev_nvme_attach_controller" 00:28:29.107 } 00:28:29.107 EOF 00:28:29.107 )") 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:29.107 11:30:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2759161 00:28:29.107 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:29.107 { 00:28:29.107 "params": { 00:28:29.107 "name": "Nvme$subsystem", 00:28:29.107 "trtype": "$TEST_TRANSPORT", 00:28:29.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.107 "adrfam": "ipv4", 00:28:29.107 "trsvcid": "$NVMF_PORT", 00:28:29.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.108 "hdgst": ${hdgst:-false}, 00:28:29.108 "ddgst": ${ddgst:-false} 00:28:29.108 }, 00:28:29.108 "method": "bdev_nvme_attach_controller" 00:28:29.108 } 00:28:29.108 EOF 00:28:29.108 )") 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:28:29.108 11:30:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:29.108 { 00:28:29.108 "params": { 00:28:29.108 "name": "Nvme$subsystem", 00:28:29.108 "trtype": "$TEST_TRANSPORT", 00:28:29.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.108 "adrfam": "ipv4", 00:28:29.108 "trsvcid": "$NVMF_PORT", 00:28:29.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.108 "hdgst": ${hdgst:-false}, 00:28:29.108 "ddgst": ${ddgst:-false} 00:28:29.108 }, 00:28:29.108 "method": "bdev_nvme_attach_controller" 00:28:29.108 } 00:28:29.108 EOF 00:28:29.108 )") 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:29.108 { 00:28:29.108 "params": { 00:28:29.108 "name": "Nvme$subsystem", 00:28:29.108 "trtype": "$TEST_TRANSPORT", 00:28:29.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.108 "adrfam": "ipv4", 00:28:29.108 "trsvcid": "$NVMF_PORT", 00:28:29.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.108 "hdgst": ${hdgst:-false}, 00:28:29.108 "ddgst": ${ddgst:-false} 00:28:29.108 }, 00:28:29.108 "method": "bdev_nvme_attach_controller" 00:28:29.108 } 00:28:29.108 EOF 00:28:29.108 )") 00:28:29.108 
11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2759155 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:29.108 "params": { 00:28:29.108 "name": "Nvme1", 00:28:29.108 "trtype": "tcp", 00:28:29.108 "traddr": "10.0.0.2", 00:28:29.108 "adrfam": "ipv4", 00:28:29.108 "trsvcid": "4420", 00:28:29.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:29.108 "hdgst": false, 00:28:29.108 "ddgst": false 00:28:29.108 }, 00:28:29.108 "method": "bdev_nvme_attach_controller" 00:28:29.108 }' 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:29.108 "params": { 00:28:29.108 "name": "Nvme1", 00:28:29.108 "trtype": "tcp", 00:28:29.108 "traddr": "10.0.0.2", 00:28:29.108 "adrfam": "ipv4", 00:28:29.108 "trsvcid": "4420", 00:28:29.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:29.108 "hdgst": false, 00:28:29.108 "ddgst": false 00:28:29.108 }, 00:28:29.108 "method": "bdev_nvme_attach_controller" 00:28:29.108 }' 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:29.108 "params": { 00:28:29.108 "name": "Nvme1", 00:28:29.108 "trtype": "tcp", 00:28:29.108 "traddr": "10.0.0.2", 00:28:29.108 "adrfam": "ipv4", 00:28:29.108 "trsvcid": "4420", 00:28:29.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:29.108 "hdgst": false, 00:28:29.108 "ddgst": false 00:28:29.108 }, 00:28:29.108 "method": "bdev_nvme_attach_controller" 00:28:29.108 }' 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 
-- # IFS=, 00:28:29.108 11:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:29.108 "params": { 00:28:29.108 "name": "Nvme1", 00:28:29.108 "trtype": "tcp", 00:28:29.108 "traddr": "10.0.0.2", 00:28:29.108 "adrfam": "ipv4", 00:28:29.108 "trsvcid": "4420", 00:28:29.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:29.108 "hdgst": false, 00:28:29.108 "ddgst": false 00:28:29.108 }, 00:28:29.108 "method": "bdev_nvme_attach_controller" 00:28:29.108 }' 00:28:29.108 [2024-11-19 11:30:24.537833] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:28:29.108 [2024-11-19 11:30:24.537832] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:28:29.108 [2024-11-19 11:30:24.537834] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:28:29.108 [2024-11-19 11:30:24.537918] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:29.108 [2024-11-19 11:30:24.537918] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:28:29.108 [2024-11-19 11:30:24.537918] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:28:29.108 [2024-11-19 11:30:24.538283] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:28:29.108 [2024-11-19 11:30:24.538373] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:28:29.366 [2024-11-19 11:30:24.728867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.366 [2024-11-19 11:30:24.782199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:29.366 [2024-11-19 11:30:24.830436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.623 [2024-11-19 11:30:24.884055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:29.624 [2024-11-19 11:30:24.928166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.624 [2024-11-19 11:30:24.984511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:29.624 [2024-11-19 11:30:25.001165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.624 [2024-11-19 11:30:25.052889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:29.881 Running I/O for 1 seconds... 00:28:29.881 Running I/O for 1 seconds... 00:28:29.881 Running I/O for 1 seconds... 00:28:29.881 Running I/O for 1 seconds... 
00:28:30.813 201064.00 IOPS, 785.41 MiB/s 00:28:30.813 Latency(us) 00:28:30.814 [2024-11-19T10:30:26.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.814 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:28:30.814 Nvme1n1 : 1.00 200685.68 783.93 0.00 0.00 634.47 282.17 1856.85 00:28:30.814 [2024-11-19T10:30:26.311Z] =================================================================================================================== 00:28:30.814 [2024-11-19T10:30:26.311Z] Total : 200685.68 783.93 0.00 0.00 634.47 282.17 1856.85 00:28:30.814 7146.00 IOPS, 27.91 MiB/s 00:28:30.814 Latency(us) 00:28:30.814 [2024-11-19T10:30:26.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.814 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:28:30.814 Nvme1n1 : 1.06 6858.82 26.79 0.00 0.00 17793.35 4417.61 60196.03 00:28:30.814 [2024-11-19T10:30:26.311Z] =================================================================================================================== 00:28:30.814 [2024-11-19T10:30:26.311Z] Total : 6858.82 26.79 0.00 0.00 17793.35 4417.61 60196.03 00:28:30.814 9473.00 IOPS, 37.00 MiB/s 00:28:30.814 Latency(us) 00:28:30.814 [2024-11-19T10:30:26.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.814 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:28:30.814 Nvme1n1 : 1.01 9533.59 37.24 0.00 0.00 13366.64 5898.24 19029.71 00:28:30.814 [2024-11-19T10:30:26.311Z] =================================================================================================================== 00:28:30.814 [2024-11-19T10:30:26.311Z] Total : 9533.59 37.24 0.00 0.00 13366.64 5898.24 19029.71 00:28:31.072 7108.00 IOPS, 27.77 MiB/s 00:28:31.072 Latency(us) 00:28:31.072 [2024-11-19T10:30:26.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.072 Job: Nvme1n1 (Core Mask 0x80, 
workload: unmap, depth: 128, IO size: 4096) 00:28:31.072 Nvme1n1 : 1.01 7233.03 28.25 0.00 0.00 17648.61 3883.61 37476.88 00:28:31.072 [2024-11-19T10:30:26.569Z] =================================================================================================================== 00:28:31.072 [2024-11-19T10:30:26.569Z] Total : 7233.03 28.25 0.00 0.00 17648.61 3883.61 37476.88 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2759156 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2759159 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2759161 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:31.072 11:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:31.072 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:31.072 rmmod nvme_tcp 00:28:31.072 rmmod nvme_fabrics 00:28:31.072 rmmod nvme_keyring 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2759012 ']' 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2759012 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2759012 ']' 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2759012 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759012 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759012' 00:28:31.332 killing process with pid 2759012 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2759012 00:28:31.332 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2759012 00:28:31.591 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:31.591 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:31.591 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:31.591 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:28:31.591 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:28:31.591 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:31.591 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:28:31.591 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:31.591 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:31.591 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.591 11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.592 
11:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.493 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:33.493 00:28:33.493 real 0m7.851s 00:28:33.493 user 0m14.972s 00:28:33.493 sys 0m4.356s 00:28:33.493 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.493 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:33.493 ************************************ 00:28:33.493 END TEST nvmf_bdev_io_wait 00:28:33.493 ************************************ 00:28:33.493 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:33.493 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:33.493 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.493 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:33.493 ************************************ 00:28:33.493 START TEST nvmf_queue_depth 00:28:33.493 ************************************ 00:28:33.493 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:33.493 * Looking for test storage... 
00:28:33.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:33.753 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:33.753 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:28:33.753 11:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:33.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.753 --rc genhtml_branch_coverage=1 00:28:33.753 --rc genhtml_function_coverage=1 00:28:33.753 --rc genhtml_legend=1 00:28:33.753 --rc geninfo_all_blocks=1 00:28:33.753 --rc geninfo_unexecuted_blocks=1 00:28:33.753 00:28:33.753 ' 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:33.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.753 --rc genhtml_branch_coverage=1 00:28:33.753 --rc genhtml_function_coverage=1 00:28:33.753 --rc genhtml_legend=1 00:28:33.753 --rc geninfo_all_blocks=1 00:28:33.753 --rc geninfo_unexecuted_blocks=1 00:28:33.753 00:28:33.753 ' 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:33.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.753 --rc genhtml_branch_coverage=1 00:28:33.753 --rc genhtml_function_coverage=1 00:28:33.753 --rc genhtml_legend=1 00:28:33.753 --rc geninfo_all_blocks=1 00:28:33.753 --rc geninfo_unexecuted_blocks=1 00:28:33.753 00:28:33.753 ' 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:33.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.753 --rc genhtml_branch_coverage=1 00:28:33.753 --rc genhtml_function_coverage=1 00:28:33.753 --rc genhtml_legend=1 00:28:33.753 --rc 
geninfo_all_blocks=1 00:28:33.753 --rc geninfo_unexecuted_blocks=1 00:28:33.753 00:28:33.753 ' 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.753 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.754 11:30:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.754 11:30:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.754 11:30:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.754 11:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.286 
11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.286 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:28:36.286 Found 0000:82:00.0 (0x8086 - 0x159b) 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.287 11:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:28:36.287 Found 0000:82:00.1 (0x8086 - 0x159b) 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:28:36.287 Found net devices under 0000:82:00.0: cvl_0_0 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:28:36.287 Found net devices under 0000:82:00.1: cvl_0_1 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:36.287 11:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:36.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:28:36.287 00:28:36.287 --- 10.0.0.2 ping statistics --- 00:28:36.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.287 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:36.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:28:36.287 00:28:36.287 --- 10.0.0.1 ping statistics --- 00:28:36.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.287 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:36.287 11:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:36.287 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2761669 00:28:36.288 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:36.288 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2761669 00:28:36.288 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2761669 ']' 00:28:36.288 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.288 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.288 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:36.288 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.288 11:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:36.288 [2024-11-19 11:30:31.742883] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:36.288 [2024-11-19 11:30:31.743889] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:28:36.288 [2024-11-19 11:30:31.743937] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.546 [2024-11-19 11:30:31.829272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.546 [2024-11-19 11:30:31.893109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.546 [2024-11-19 11:30:31.893166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.546 [2024-11-19 11:30:31.893196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.546 [2024-11-19 11:30:31.893208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.546 [2024-11-19 11:30:31.893218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.546 [2024-11-19 11:30:31.893901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.546 [2024-11-19 11:30:31.992356] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:36.546 [2024-11-19 11:30:31.992721] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:36.546 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.546 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:36.546 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:36.546 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:36.546 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:36.546 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.546 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:36.546 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.546 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:36.805 [2024-11-19 11:30:32.046514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:36.805 Malloc0 00:28:36.805 11:30:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:36.805 [2024-11-19 11:30:32.106621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.805 
11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2761733 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2761733 /var/tmp/bdevperf.sock 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2761733 ']' 00:28:36.805 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:36.806 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.806 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:36.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:36.806 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.806 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:36.806 [2024-11-19 11:30:32.162580] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:28:36.806 [2024-11-19 11:30:32.162661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761733 ] 00:28:36.806 [2024-11-19 11:30:32.244436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.064 [2024-11-19 11:30:32.306860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.064 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.064 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:37.064 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:37.064 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.064 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:37.323 NVMe0n1 00:28:37.323 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.323 11:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:37.323 Running I/O for 10 seconds... 
00:28:39.632 9193.00 IOPS, 35.91 MiB/s [2024-11-19T10:30:36.064Z] 9235.00 IOPS, 36.07 MiB/s [2024-11-19T10:30:36.998Z] 9379.00 IOPS, 36.64 MiB/s [2024-11-19T10:30:37.932Z] 9473.00 IOPS, 37.00 MiB/s [2024-11-19T10:30:38.867Z] 9467.80 IOPS, 36.98 MiB/s [2024-11-19T10:30:39.801Z] 9478.00 IOPS, 37.02 MiB/s [2024-11-19T10:30:41.176Z] 9512.14 IOPS, 37.16 MiB/s [2024-11-19T10:30:42.111Z] 9565.12 IOPS, 37.36 MiB/s [2024-11-19T10:30:43.074Z] 9571.56 IOPS, 37.39 MiB/s [2024-11-19T10:30:43.074Z] 9620.20 IOPS, 37.58 MiB/s 00:28:47.577 Latency(us) 00:28:47.577 [2024-11-19T10:30:43.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.577 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:28:47.577 Verification LBA range: start 0x0 length 0x4000 00:28:47.577 NVMe0n1 : 10.08 9647.22 37.68 0.00 0.00 105755.55 20486.07 66409.81 00:28:47.577 [2024-11-19T10:30:43.074Z] =================================================================================================================== 00:28:47.577 [2024-11-19T10:30:43.074Z] Total : 9647.22 37.68 0.00 0.00 105755.55 20486.07 66409.81 00:28:47.577 { 00:28:47.577 "results": [ 00:28:47.577 { 00:28:47.577 "job": "NVMe0n1", 00:28:47.577 "core_mask": "0x1", 00:28:47.577 "workload": "verify", 00:28:47.577 "status": "finished", 00:28:47.577 "verify_range": { 00:28:47.577 "start": 0, 00:28:47.577 "length": 16384 00:28:47.577 }, 00:28:47.577 "queue_depth": 1024, 00:28:47.577 "io_size": 4096, 00:28:47.577 "runtime": 10.077309, 00:28:47.577 "iops": 9647.218319890757, 00:28:47.577 "mibps": 37.68444656207327, 00:28:47.577 "io_failed": 0, 00:28:47.577 "io_timeout": 0, 00:28:47.577 "avg_latency_us": 105755.54810709493, 00:28:47.577 "min_latency_us": 20486.068148148148, 00:28:47.577 "max_latency_us": 66409.81333333334 00:28:47.577 } 00:28:47.577 ], 00:28:47.577 "core_count": 1 00:28:47.577 } 00:28:47.577 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2761733 00:28:47.577 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2761733 ']' 00:28:47.577 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2761733 00:28:47.577 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:28:47.577 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:47.577 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2761733 00:28:47.577 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:47.577 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:47.577 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2761733' 00:28:47.577 killing process with pid 2761733 00:28:47.577 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2761733 00:28:47.577 Received shutdown signal, test time was about 10.000000 seconds 00:28:47.577 00:28:47.577 Latency(us) 00:28:47.577 [2024-11-19T10:30:43.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.577 [2024-11-19T10:30:43.074Z] =================================================================================================================== 00:28:47.577 [2024-11-19T10:30:43.074Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:47.577 11:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2761733 00:28:47.835 11:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:47.835 rmmod nvme_tcp 00:28:47.835 rmmod nvme_fabrics 00:28:47.835 rmmod nvme_keyring 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2761669 ']' 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2761669 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2761669 ']' 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2761669 00:28:47.835 11:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2761669 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2761669' 00:28:47.835 killing process with pid 2761669 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2761669 00:28:47.835 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2761669 00:28:48.093 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:48.093 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:48.093 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:48.093 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:28:48.093 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:28:48.093 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:48.093 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:28:48.093 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:48.093 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:48.093 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.093 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.093 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:50.630 00:28:50.630 real 0m16.588s 00:28:50.630 user 0m22.202s 00:28:50.630 sys 0m4.053s 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:50.630 ************************************ 00:28:50.630 END TEST nvmf_queue_depth 00:28:50.630 ************************************ 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:50.630 ************************************ 00:28:50.630 START 
TEST nvmf_target_multipath 00:28:50.630 ************************************ 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:50.630 * Looking for test storage... 00:28:50.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:28:50.630 11:30:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:28:50.630 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.631 --rc genhtml_branch_coverage=1 00:28:50.631 --rc genhtml_function_coverage=1 00:28:50.631 --rc genhtml_legend=1 00:28:50.631 --rc geninfo_all_blocks=1 00:28:50.631 --rc geninfo_unexecuted_blocks=1 00:28:50.631 00:28:50.631 ' 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.631 --rc genhtml_branch_coverage=1 00:28:50.631 --rc genhtml_function_coverage=1 00:28:50.631 --rc genhtml_legend=1 00:28:50.631 --rc geninfo_all_blocks=1 00:28:50.631 --rc geninfo_unexecuted_blocks=1 00:28:50.631 00:28:50.631 ' 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.631 --rc genhtml_branch_coverage=1 00:28:50.631 --rc genhtml_function_coverage=1 00:28:50.631 --rc genhtml_legend=1 00:28:50.631 --rc geninfo_all_blocks=1 00:28:50.631 --rc geninfo_unexecuted_blocks=1 00:28:50.631 00:28:50.631 ' 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.631 --rc genhtml_branch_coverage=1 00:28:50.631 --rc genhtml_function_coverage=1 00:28:50.631 --rc genhtml_legend=1 00:28:50.631 --rc geninfo_all_blocks=1 00:28:50.631 --rc geninfo_unexecuted_blocks=1 00:28:50.631 00:28:50.631 ' 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:50.631 11:30:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:50.631 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:28:50.632 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:50.632 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.632 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:50.632 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:50.632 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:50.632 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.632 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.632 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.632 11:30:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:50.632 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:50.632 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.632 11:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:28:53.166 11:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.166 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:28:53.167 Found 0000:82:00.0 (0x8086 - 0x159b) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:28:53.167 Found 0000:82:00.1 (0x8086 - 0x159b) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:28:53.167 Found net devices under 0000:82:00.0: cvl_0_0 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.167 11:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:28:53.167 Found net devices under 0000:82:00.1: cvl_0_1 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.167 11:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.167 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.167 11:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:53.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:28:53.168 00:28:53.168 --- 10.0.0.2 ping statistics --- 00:28:53.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.168 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:28:53.168 00:28:53.168 --- 10.0.0.1 ping statistics --- 00:28:53.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.168 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:28:53.168 only one NIC for nvmf test 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:28:53.168 11:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:53.168 rmmod nvme_tcp 00:28:53.168 rmmod nvme_fabrics 00:28:53.168 rmmod nvme_keyring 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:53.168 11:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.168 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.707 
11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:55.707 00:28:55.707 real 0m5.050s 00:28:55.707 user 0m1.116s 00:28:55.707 sys 0m1.961s 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:55.707 ************************************ 00:28:55.707 END TEST nvmf_target_multipath 00:28:55.707 ************************************ 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:55.707 ************************************ 00:28:55.707 START TEST nvmf_zcopy 00:28:55.707 ************************************ 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:55.707 * Looking for test storage... 
00:28:55.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:28:55.707 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:28:55.708 11:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:55.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.708 --rc genhtml_branch_coverage=1 00:28:55.708 --rc genhtml_function_coverage=1 00:28:55.708 --rc genhtml_legend=1 00:28:55.708 --rc geninfo_all_blocks=1 00:28:55.708 --rc geninfo_unexecuted_blocks=1 00:28:55.708 00:28:55.708 ' 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:55.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.708 --rc genhtml_branch_coverage=1 00:28:55.708 --rc genhtml_function_coverage=1 00:28:55.708 --rc genhtml_legend=1 00:28:55.708 --rc geninfo_all_blocks=1 00:28:55.708 --rc geninfo_unexecuted_blocks=1 00:28:55.708 00:28:55.708 ' 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:55.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.708 --rc genhtml_branch_coverage=1 00:28:55.708 --rc genhtml_function_coverage=1 00:28:55.708 --rc genhtml_legend=1 00:28:55.708 --rc geninfo_all_blocks=1 00:28:55.708 --rc geninfo_unexecuted_blocks=1 00:28:55.708 00:28:55.708 ' 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:55.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.708 --rc genhtml_branch_coverage=1 00:28:55.708 --rc genhtml_function_coverage=1 00:28:55.708 --rc genhtml_legend=1 00:28:55.708 --rc geninfo_all_blocks=1 00:28:55.708 --rc geninfo_unexecuted_blocks=1 00:28:55.708 00:28:55.708 ' 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.708 11:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:55.708 11:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:28:55.708 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:58.246 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.246 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.246 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.246 
11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.246 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.246 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.246 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.246 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.247 11:30:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:28:58.247 Found 0000:82:00.0 (0x8086 - 0x159b) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:28:58.247 Found 0000:82:00.1 (0x8086 - 0x159b) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:28:58.247 Found net devices under 0000:82:00.0: cvl_0_0 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:28:58.247 Found net devices under 0000:82:00.1: cvl_0_1 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.247 11:30:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:28:58.247 00:28:58.247 --- 10.0.0.2 ping statistics --- 00:28:58.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.247 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:28:58.247 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:28:58.248 00:28:58.248 --- 10.0.0.1 ping statistics --- 00:28:58.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.248 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2767572 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2767572 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2767572 ']' 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.248 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:58.248 [2024-11-19 11:30:53.579847] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:58.248 [2024-11-19 11:30:53.580985] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:28:58.248 [2024-11-19 11:30:53.581056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.248 [2024-11-19 11:30:53.664248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.248 [2024-11-19 11:30:53.721758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.248 [2024-11-19 11:30:53.721825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.248 [2024-11-19 11:30:53.721854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.248 [2024-11-19 11:30:53.721864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.248 [2024-11-19 11:30:53.721874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.248 [2024-11-19 11:30:53.722522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.507 [2024-11-19 11:30:53.817557] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:58.507 [2024-11-19 11:30:53.817900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:58.507 [2024-11-19 11:30:53.867112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:58.507 
11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:58.507 [2024-11-19 11:30:53.883462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:58.507 malloc0 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.507 { 00:28:58.507 "params": { 00:28:58.507 "name": "Nvme$subsystem", 00:28:58.507 "trtype": "$TEST_TRANSPORT", 00:28:58.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.507 "adrfam": "ipv4", 00:28:58.507 "trsvcid": "$NVMF_PORT", 00:28:58.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.507 "hdgst": ${hdgst:-false}, 00:28:58.507 "ddgst": ${ddgst:-false} 00:28:58.507 }, 00:28:58.507 "method": "bdev_nvme_attach_controller" 00:28:58.507 } 00:28:58.507 EOF 00:28:58.507 )") 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:58.507 11:30:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:58.507 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:58.507 "params": { 00:28:58.507 "name": "Nvme1", 00:28:58.507 "trtype": "tcp", 00:28:58.507 "traddr": "10.0.0.2", 00:28:58.507 "adrfam": "ipv4", 00:28:58.507 "trsvcid": "4420", 00:28:58.507 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:58.507 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:58.507 "hdgst": false, 00:28:58.507 "ddgst": false 00:28:58.507 }, 00:28:58.507 "method": "bdev_nvme_attach_controller" 00:28:58.507 }' 00:28:58.507 [2024-11-19 11:30:53.976072] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:28:58.507 [2024-11-19 11:30:53.976147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2767682 ] 00:28:58.766 [2024-11-19 11:30:54.054139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.766 [2024-11-19 11:30:54.118506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.024 Running I/O for 10 seconds... 
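The `gen_nvmf_target_json` fragment traced above builds the bdevperf attach-controller config by expanding shell variables inside a here-document and then printing the accumulated result. A self-contained sketch of that templating step follows; it is simplified from the trace (the real helper lives in SPDK's `nvmf/common.sh` and loops over multiple subsystems), and the fixed subsystem number and addresses are taken from this particular run.

```shell
# Sketch of the here-document templating seen in the trace: shell variables
# are expanded inside the heredoc, ${hdgst:-false}/${ddgst:-false} default to
# false when unset, and the finished JSON object is printed.
# Assumption: a single subsystem with the addresses used in this log.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

The expanded output matches the `printf '%s\n' '{ ... }'` block recorded in the trace, which bdevperf then consumes over an anonymous file descriptor.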
00:29:00.894 6008.00 IOPS, 46.94 MiB/s [2024-11-19T10:30:57.767Z] 6080.50 IOPS, 47.50 MiB/s [2024-11-19T10:30:58.702Z] 6078.33 IOPS, 47.49 MiB/s [2024-11-19T10:30:59.647Z] 6098.75 IOPS, 47.65 MiB/s [2024-11-19T10:31:00.580Z] 6096.20 IOPS, 47.63 MiB/s [2024-11-19T10:31:01.512Z] 6087.00 IOPS, 47.55 MiB/s [2024-11-19T10:31:02.445Z] 6091.14 IOPS, 47.59 MiB/s [2024-11-19T10:31:03.379Z] 6111.25 IOPS, 47.74 MiB/s [2024-11-19T10:31:04.755Z] 6126.44 IOPS, 47.86 MiB/s [2024-11-19T10:31:04.755Z] 6134.80 IOPS, 47.93 MiB/s 00:29:09.258 Latency(us) 00:29:09.258 [2024-11-19T10:31:04.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.258 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:29:09.258 Verification LBA range: start 0x0 length 0x1000 00:29:09.258 Nvme1n1 : 10.02 6138.09 47.95 0.00 0.00 20800.13 3883.61 28544.57 00:29:09.258 [2024-11-19T10:31:04.755Z] =================================================================================================================== 00:29:09.258 [2024-11-19T10:31:04.755Z] Total : 6138.09 47.95 0.00 0.00 20800.13 3883.61 28544.57 00:29:09.258 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2768900 00:29:09.258 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:09.258 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:09.258 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:09.258 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:09.258 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:09.258 11:31:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:09.258 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:09.258 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:09.258 { 00:29:09.258 "params": { 00:29:09.258 "name": "Nvme$subsystem", 00:29:09.258 "trtype": "$TEST_TRANSPORT", 00:29:09.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.258 "adrfam": "ipv4", 00:29:09.258 "trsvcid": "$NVMF_PORT", 00:29:09.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.258 "hdgst": ${hdgst:-false}, 00:29:09.258 "ddgst": ${ddgst:-false} 00:29:09.258 }, 00:29:09.258 "method": "bdev_nvme_attach_controller" 00:29:09.258 } 00:29:09.258 EOF 00:29:09.258 )") 00:29:09.258 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:09.258 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
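Both bdevperf invocations in this trace receive their target description as `--json /dev/fd/62` or `/dev/fd/63`, i.e. the generated config never touches disk. A minimal stand-alone sketch of that process-substitution pattern is below; `cat` stands in for bdevperf (an assumption, since the real binary needs SPDK and NVMe-oF hardware), and the JSON body is trimmed to a placeholder.

```shell
# Process substitution: bash exposes the producer's stdout as a readable
# /dev/fd/N path, which is what the trace's "--json /dev/fd/63" refers to.
# Assumption: gen_json is a trimmed stand-in for gen_nvmf_target_json.
gen_json() {
    printf '%s\n' '{ "params": { "name": "Nvme1", "trtype": "tcp" }, "method": "bdev_nvme_attach_controller" }'
}

# The consumer reads the config through the fd path without a temp file.
cat <(gen_json)
```

The same shape works for any consumer that takes a config-file path, which is why the test scripts can parameterize each bdevperf run without writing per-run files.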
00:29:09.258 [2024-11-19 11:31:04.595048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.258 [2024-11-19 11:31:04.595085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.258 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:09.258 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:09.258 "params": { 00:29:09.258 "name": "Nvme1", 00:29:09.258 "trtype": "tcp", 00:29:09.258 "traddr": "10.0.0.2", 00:29:09.258 "adrfam": "ipv4", 00:29:09.258 "trsvcid": "4420", 00:29:09.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:09.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:09.258 "hdgst": false, 00:29:09.258 "ddgst": false 00:29:09.258 }, 00:29:09.258 "method": "bdev_nvme_attach_controller" 00:29:09.258 }' 00:29:09.258 [2024-11-19 11:31:04.602986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.258 [2024-11-19 11:31:04.603006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.258 [2024-11-19 11:31:04.610985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.258 [2024-11-19 11:31:04.611004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.258 [2024-11-19 11:31:04.618984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.258 [2024-11-19 11:31:04.619004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.258 [2024-11-19 11:31:04.626984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:09.258 [2024-11-19 11:31:04.627004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:09.258 [2024-11-19 11:31:04.634626] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:29:09.258 [2024-11-19 11:31:04.634723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2768900 ]
00:29:09.258 [2024-11-19 11:31:04.634987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:09.258 [2024-11-19 11:31:04.635006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the two-line error pair above repeats continuously, with only the timestamps changing, from 11:31:04.642 through the end of this excerpt at 11:31:06.283; the repeats are elided here and only the distinct entries interleaved with them are kept below]
00:29:09.258 [2024-11-19 11:31:04.710841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:09.518 [2024-11-19 11:31:04.769879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:09.777 Running I/O for 5 seconds...
00:29:10.813 11897.00 IOPS, 92.95 MiB/s [2024-11-19T10:31:06.310Z]
00:29:10.814 [2024-11-19 11:31:06.283337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:29:10.814 [2024-11-19 11:31:06.283386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.814 [2024-11-19 11:31:06.292550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.814 [2024-11-19 11:31:06.292578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:10.814 [2024-11-19 11:31:06.304176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:10.814 [2024-11-19 11:31:06.304202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.072 [2024-11-19 11:31:06.315005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.072 [2024-11-19 11:31:06.315029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.072 [2024-11-19 11:31:06.325862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.072 [2024-11-19 11:31:06.325887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.072 [2024-11-19 11:31:06.340890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.072 [2024-11-19 11:31:06.340915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.072 [2024-11-19 11:31:06.350274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.072 [2024-11-19 11:31:06.350299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.072 [2024-11-19 11:31:06.361597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.072 [2024-11-19 11:31:06.361623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.072 [2024-11-19 11:31:06.371938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.072 
[2024-11-19 11:31:06.371962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.072 [2024-11-19 11:31:06.384247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.072 [2024-11-19 11:31:06.384272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.072 [2024-11-19 11:31:06.395030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.072 [2024-11-19 11:31:06.395055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.072 [2024-11-19 11:31:06.406811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.072 [2024-11-19 11:31:06.406837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.072 [2024-11-19 11:31:06.417775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.417800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.073 [2024-11-19 11:31:06.432322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.432375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.073 [2024-11-19 11:31:06.442302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.442327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.073 [2024-11-19 11:31:06.454123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.454147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.073 [2024-11-19 11:31:06.464453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.464479] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.073 [2024-11-19 11:31:06.480554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.480581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.073 [2024-11-19 11:31:06.490147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.490171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.073 [2024-11-19 11:31:06.505083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.505109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.073 [2024-11-19 11:31:06.514154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.514179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.073 [2024-11-19 11:31:06.527745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.527770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.073 [2024-11-19 11:31:06.537488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.537514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.073 [2024-11-19 11:31:06.549457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.549484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.073 [2024-11-19 11:31:06.563772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.073 [2024-11-19 11:31:06.563797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:11.331 [2024-11-19 11:31:06.573669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.331 [2024-11-19 11:31:06.573695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.331 [2024-11-19 11:31:06.585817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.331 [2024-11-19 11:31:06.585842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.331 [2024-11-19 11:31:06.601626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.331 [2024-11-19 11:31:06.601666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.331 [2024-11-19 11:31:06.611521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.331 [2024-11-19 11:31:06.611548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.331 [2024-11-19 11:31:06.623217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.331 [2024-11-19 11:31:06.623242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.331 [2024-11-19 11:31:06.634438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.331 [2024-11-19 11:31:06.634465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.331 [2024-11-19 11:31:06.645803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.331 [2024-11-19 11:31:06.645828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.331 [2024-11-19 11:31:06.661640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.331 [2024-11-19 11:31:06.661680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.331 [2024-11-19 11:31:06.672091] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.672116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.683475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.683517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.694309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.694334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.705464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.705490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.719271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.719296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.728823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.728847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.740359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.740393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.751510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.751536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.762539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.762566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.773812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.773836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.788334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.788384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.797931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.797955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.811514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.811540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.332 [2024-11-19 11:31:06.821066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.332 [2024-11-19 11:31:06.821091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.590 [2024-11-19 11:31:06.833864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.590 [2024-11-19 11:31:06.833888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.590 [2024-11-19 11:31:06.848403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.590 [2024-11-19 11:31:06.848438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.590 [2024-11-19 11:31:06.858567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.590 
[2024-11-19 11:31:06.858593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.590 [2024-11-19 11:31:06.870389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.590 [2024-11-19 11:31:06.870416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.590 [2024-11-19 11:31:06.880774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.590 [2024-11-19 11:31:06.880799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.590 [2024-11-19 11:31:06.891992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.590 [2024-11-19 11:31:06.892017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.590 [2024-11-19 11:31:06.902121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.590 [2024-11-19 11:31:06.902145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.590 [2024-11-19 11:31:06.917043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.590 [2024-11-19 11:31:06.917067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:06.926682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:06.926723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:06.938729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:06.938754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:06.948945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:06.948970] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:06.962628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:06.962672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:06.972747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:06.972772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:06.984985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:06.985009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:06.997949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:06.997973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:07.012088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:07.012112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:07.021712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:07.021736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:07.035255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:07.035280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:07.044299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:07.044324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:11.591 [2024-11-19 11:31:07.056287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:07.056312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:07.072706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:07.072754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.591 [2024-11-19 11:31:07.082932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.591 [2024-11-19 11:31:07.082959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.095452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.095479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.106301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.106325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 11735.50 IOPS, 91.68 MiB/s [2024-11-19T10:31:07.347Z] [2024-11-19 11:31:07.118005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.118029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.128977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.129002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.145081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.145106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:11.850 [2024-11-19 11:31:07.154882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.154907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.166884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.166909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.177932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.177956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.191540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.191568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.201703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.201728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.217222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.217247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.226803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.226827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.238417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.238445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.248884] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.248909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.263115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.263140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.273001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.273026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.284974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.285004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.301163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.301197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.310809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.310834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.322267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.322292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.336671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.336696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:11.850 [2024-11-19 11:31:07.345720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:11.850 [2024-11-19 11:31:07.345746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.360203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.360227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.369906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.369929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.384126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.384150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.393530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.393556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.404522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.404548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.414855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.414879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.424486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.424513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.434551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 
[2024-11-19 11:31:07.434575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.445275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.445299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.459800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.459824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.469294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.469318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.480830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.480854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.496048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.496072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.505728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.505751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.520822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.520854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.530128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.530151] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.544060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.544084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.553208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.553232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.564751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.564776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.580112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.580138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.590035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.590059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.109 [2024-11-19 11:31:07.604062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.109 [2024-11-19 11:31:07.604087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.398 [2024-11-19 11:31:07.613660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.398 [2024-11-19 11:31:07.613689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:12.398 [2024-11-19 11:31:07.627780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.398 [2024-11-19 11:31:07.627816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:12.398 [2024-11-19 11:31:07.637698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:12.398 [2024-11-19 11:31:07.637748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeated for every add-namespace attempt from 11:31:07.649818 through 11:31:09.511387; interleaved throughput samples retained below ...]
00:29:12.681 11774.33 IOPS, 91.99 MiB/s [2024-11-19T10:31:08.178Z]
00:29:13.718 11801.25 IOPS, 92.20 MiB/s [2024-11-19T10:31:09.215Z]
00:29:14.233 [2024-11-19 11:31:09.521867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
[2024-11-19 11:31:09.521892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.233 [2024-11-19 11:31:09.535988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.233 [2024-11-19 11:31:09.536015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.233 [2024-11-19 11:31:09.545023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.233 [2024-11-19 11:31:09.545046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.233 [2024-11-19 11:31:09.556342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.233 [2024-11-19 11:31:09.556390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.233 [2024-11-19 11:31:09.566868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.233 [2024-11-19 11:31:09.566892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.233 [2024-11-19 11:31:09.577140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.233 [2024-11-19 11:31:09.577163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.233 [2024-11-19 11:31:09.591010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.591034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.234 [2024-11-19 11:31:09.600107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.600130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.234 [2024-11-19 11:31:09.611943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.611968] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.234 [2024-11-19 11:31:09.622633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.622673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.234 [2024-11-19 11:31:09.635660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.635686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.234 [2024-11-19 11:31:09.644973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.644996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.234 [2024-11-19 11:31:09.656636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.656676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.234 [2024-11-19 11:31:09.667072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.667096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.234 [2024-11-19 11:31:09.677717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.677740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.234 [2024-11-19 11:31:09.691978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.692010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.234 [2024-11-19 11:31:09.701226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.701250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:14.234 [2024-11-19 11:31:09.712748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.712772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.234 [2024-11-19 11:31:09.723095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.234 [2024-11-19 11:31:09.723119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.734426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.734453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.745865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.745888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.760208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.760231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.770398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.770424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.782094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.782118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.795673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.795699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.804874] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.804897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.816160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.816183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.826662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.826687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.839242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.839267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.848486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.848513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.860074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.860098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.871106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.871130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.881603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.881628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.894306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.894330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.907062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.907093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.916458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.916483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.927994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.928018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.938519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.938546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.949952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.949977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.963461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.963487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.972838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 [2024-11-19 11:31:09.972862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.492 [2024-11-19 11:31:09.984614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.492 
[2024-11-19 11:31:09.984656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 [2024-11-19 11:31:09.996087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:09.996111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 [2024-11-19 11:31:10.007152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.007177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 [2024-11-19 11:31:10.018330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.018380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 [2024-11-19 11:31:10.029998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.030028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 [2024-11-19 11:31:10.051891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.051933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 [2024-11-19 11:31:10.062480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.062508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 [2024-11-19 11:31:10.073327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.073374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 [2024-11-19 11:31:10.087254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.087278] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 [2024-11-19 11:31:10.096798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.096839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 [2024-11-19 11:31:10.109075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.109098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 11818.60 IOPS, 92.33 MiB/s [2024-11-19T10:31:10.247Z] [2024-11-19 11:31:10.121160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.121184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 [2024-11-19 11:31:10.126999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.127032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 00:29:14.750 Latency(us) 00:29:14.750 [2024-11-19T10:31:10.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.750 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:29:14.750 Nvme1n1 : 5.01 11817.61 92.33 0.00 0.00 10816.34 2839.89 18544.26 00:29:14.750 [2024-11-19T10:31:10.247Z] =================================================================================================================== 00:29:14.750 [2024-11-19T10:31:10.247Z] Total : 11817.61 92.33 0.00 0.00 10816.34 2839.89 18544.26 00:29:14.750 [2024-11-19 11:31:10.134991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.135013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:14.750 [2024-11-19 11:31:10.143010] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:14.750 [2024-11-19 11:31:10.143033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:15.009 [2024-11-19 11:31:10.334985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:15.009 [2024-11-19 11:31:10.335004] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:15.009 [2024-11-19 11:31:10.342984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:15.009 [2024-11-19 11:31:10.343003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:15.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2768900) - No such process 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2768900 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:15.009 delay0 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.009 11:31:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.009 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:29:15.009 [2024-11-19 11:31:10.461324] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:23.120 Initializing NVMe Controllers 00:29:23.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:23.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:23.121 Initialization complete. Launching workers. 00:29:23.121 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 240, failed: 21362 00:29:23.121 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21488, failed to submit 114 00:29:23.121 success 21386, unsuccessful 102, failed 0 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:23.121 11:31:17 
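(Editor's sketch, not part of the captured trace: the `rpc_cmd` sequence zcopy.sh drives above, reproduced as plain `rpc.py` calls. This assumes a running SPDK nvmf target and SPDK's `scripts/rpc.py` on PATH; it is a command fragment for reference, not a standalone runnable script.)

```bash
# Remove NSID 1 from the subsystem so it can be re-added behind a delay bdev.
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
# Wrap malloc0 in a delay bdev (latency values passed via -r/-t/-w/-n, as in the trace).
rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Attach the delayed bdev back to the subsystem as NSID 1.
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
```

The slow namespace is what lets the subsequent `abort` example (`-c 0x1 -t 5 -q 64 -w randrw -M 50`) find in-flight I/O to abort over the TCP transport.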
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:23.121 rmmod nvme_tcp 00:29:23.121 rmmod nvme_fabrics 00:29:23.121 rmmod nvme_keyring 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2767572 ']' 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2767572 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2767572 ']' 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2767572 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2767572 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2767572' 00:29:23.121 
killing process with pid 2767572 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2767572 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2767572 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.121 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.497 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:24.497 00:29:24.497 real 0m29.164s 00:29:24.497 user 0m39.427s 00:29:24.497 sys 
0m11.969s 00:29:24.497 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.497 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:24.497 ************************************ 00:29:24.497 END TEST nvmf_zcopy 00:29:24.497 ************************************ 00:29:24.497 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:24.497 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:24.497 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.497 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:24.497 ************************************ 00:29:24.497 START TEST nvmf_nmic 00:29:24.497 ************************************ 00:29:24.497 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:24.497 * Looking for test storage... 
00:29:24.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:24.497 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:24.497 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:29:24.497 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.756 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:24.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.757 --rc genhtml_branch_coverage=1 00:29:24.757 --rc genhtml_function_coverage=1 00:29:24.757 --rc genhtml_legend=1 00:29:24.757 --rc geninfo_all_blocks=1 00:29:24.757 --rc geninfo_unexecuted_blocks=1 00:29:24.757 00:29:24.757 ' 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:24.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.757 --rc genhtml_branch_coverage=1 00:29:24.757 --rc genhtml_function_coverage=1 00:29:24.757 --rc genhtml_legend=1 00:29:24.757 --rc geninfo_all_blocks=1 00:29:24.757 --rc geninfo_unexecuted_blocks=1 00:29:24.757 00:29:24.757 ' 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:24.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.757 --rc genhtml_branch_coverage=1 00:29:24.757 --rc genhtml_function_coverage=1 00:29:24.757 --rc genhtml_legend=1 00:29:24.757 --rc geninfo_all_blocks=1 00:29:24.757 --rc geninfo_unexecuted_blocks=1 00:29:24.757 00:29:24.757 ' 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:24.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.757 --rc genhtml_branch_coverage=1 00:29:24.757 --rc genhtml_function_coverage=1 00:29:24.757 --rc genhtml_legend=1 00:29:24.757 --rc geninfo_all_blocks=1 00:29:24.757 --rc geninfo_unexecuted_blocks=1 00:29:24.757 00:29:24.757 ' 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.757 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.292 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:27.292 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.293 11:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:29:27.293 Found 0000:82:00.0 (0x8086 - 0x159b) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:29:27.293 Found 0000:82:00.1 (0x8086 - 0x159b) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:29:27.293 Found net devices under 0000:82:00.0: cvl_0_0 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:29:27.293 Found net devices under 0000:82:00.1: cvl_0_1 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:27.293 11:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.293 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:27.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:27.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:29:27.294 00:29:27.294 --- 10.0.0.2 ping statistics --- 00:29:27.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.294 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:27.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:29:27.294 00:29:27.294 --- 10.0.0.1 ping statistics --- 00:29:27.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.294 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2772688 
00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2772688 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2772688 ']' 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.294 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.552 [2024-11-19 11:31:22.814586] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:27.552 [2024-11-19 11:31:22.815604] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:29:27.552 [2024-11-19 11:31:22.815660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.552 [2024-11-19 11:31:22.899782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:27.552 [2024-11-19 11:31:22.958229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.552 [2024-11-19 11:31:22.958283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:27.552 [2024-11-19 11:31:22.958311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.552 [2024-11-19 11:31:22.958321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.552 [2024-11-19 11:31:22.958330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:27.552 [2024-11-19 11:31:22.959904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.552 [2024-11-19 11:31:22.960014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:27.552 [2024-11-19 11:31:22.960104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:27.552 [2024-11-19 11:31:22.960112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.552 [2024-11-19 11:31:23.044418] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:27.552 [2024-11-19 11:31:23.044664] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:27.552 [2024-11-19 11:31:23.044942] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:27.552 [2024-11-19 11:31:23.045631] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:27.552 [2024-11-19 11:31:23.045870] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.812 [2024-11-19 11:31:23.104826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.812 Malloc0 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.812 [2024-11-19 11:31:23.173043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:29:27.812 test case1: single bdev can't be used in multiple subsystems 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.812 [2024-11-19 11:31:23.196769] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:29:27.812 [2024-11-19 11:31:23.196798] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:29:27.812 [2024-11-19 11:31:23.196828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:27.812 request: 00:29:27.812 { 00:29:27.812 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:29:27.812 "namespace": { 00:29:27.812 "bdev_name": "Malloc0", 00:29:27.812 "no_auto_visible": false 00:29:27.812 }, 00:29:27.812 "method": "nvmf_subsystem_add_ns", 00:29:27.812 "req_id": 1 00:29:27.812 } 00:29:27.812 Got JSON-RPC error response 00:29:27.812 response: 00:29:27.812 { 00:29:27.812 "code": -32602, 00:29:27.812 "message": "Invalid parameters" 00:29:27.812 } 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:29:27.812 Adding namespace failed - expected result. 
00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:29:27.812 test case2: host connect to nvmf target in multiple paths 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:27.812 [2024-11-19 11:31:23.204860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.812 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:28.071 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:29:28.329 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:29:28.329 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:29:28.329 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:28.329 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:29:28.329 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:29:30.230 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:30.230 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:30.230 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:30.230 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:29:30.230 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:30.230 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:29:30.230 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:30.230 [global] 00:29:30.230 thread=1 00:29:30.230 invalidate=1 00:29:30.230 rw=write 00:29:30.230 time_based=1 00:29:30.230 runtime=1 00:29:30.230 ioengine=libaio 00:29:30.230 direct=1 00:29:30.230 bs=4096 00:29:30.230 iodepth=1 00:29:30.230 norandommap=0 00:29:30.230 numjobs=1 00:29:30.230 00:29:30.230 verify_dump=1 00:29:30.230 verify_backlog=512 00:29:30.230 verify_state_save=0 00:29:30.230 do_verify=1 00:29:30.230 verify=crc32c-intel 00:29:30.230 [job0] 00:29:30.230 filename=/dev/nvme0n1 00:29:30.230 Could not set queue depth (nvme0n1) 00:29:30.488 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:30.488 fio-3.35 00:29:30.488 Starting 1 thread 00:29:31.862 00:29:31.862 job0: (groupid=0, jobs=1): err= 0: pid=2773151: Tue Nov 19 
11:31:27 2024 00:29:31.862 read: IOPS=118, BW=476KiB/s (487kB/s)(476KiB/1001msec) 00:29:31.862 slat (nsec): min=6690, max=37204, avg=10664.05, stdev=4841.23 00:29:31.862 clat (usec): min=197, max=41994, avg=7420.61, stdev=15618.58 00:29:31.862 lat (usec): min=206, max=42012, avg=7431.28, stdev=15621.19 00:29:31.862 clat percentiles (usec): 00:29:31.862 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 206], 20.00th=[ 210], 00:29:31.862 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:29:31.862 | 70.00th=[ 235], 80.00th=[ 251], 90.00th=[41157], 95.00th=[41157], 00:29:31.862 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:31.862 | 99.99th=[42206] 00:29:31.862 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:29:31.862 slat (usec): min=7, max=28692, avg=64.72, stdev=1267.67 00:29:31.862 clat (usec): min=137, max=262, avg=158.21, stdev=24.45 00:29:31.862 lat (usec): min=145, max=28931, avg=222.94, stdev=1271.46 00:29:31.862 clat percentiles (usec): 00:29:31.862 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 143], 20.00th=[ 145], 00:29:31.862 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:29:31.862 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 237], 00:29:31.862 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 265], 99.95th=[ 265], 00:29:31.862 | 99.99th=[ 265] 00:29:31.862 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:29:31.862 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:31.862 lat (usec) : 250=96.04%, 500=0.63% 00:29:31.862 lat (msec) : 50=3.33% 00:29:31.862 cpu : usr=0.30%, sys=0.80%, ctx=633, majf=0, minf=1 00:29:31.862 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:31.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:31.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:31.862 issued rwts: total=119,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:29:31.862 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:31.862 00:29:31.862 Run status group 0 (all jobs): 00:29:31.862 READ: bw=476KiB/s (487kB/s), 476KiB/s-476KiB/s (487kB/s-487kB/s), io=476KiB (487kB), run=1001-1001msec 00:29:31.862 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:29:31.862 00:29:31.862 Disk stats (read/write): 00:29:31.862 nvme0n1: ios=45/512, merge=0/0, ticks=1766/82, in_queue=1848, util=98.70% 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:31.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:29:31.862 11:31:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:31.862 rmmod nvme_tcp 00:29:31.862 rmmod nvme_fabrics 00:29:31.862 rmmod nvme_keyring 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2772688 ']' 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2772688 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2772688 ']' 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2772688 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2772688 
00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2772688' 00:29:31.862 killing process with pid 2772688 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2772688 00:29:31.862 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2772688 00:29:32.120 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.120 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.120 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.120 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:29:32.120 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:29:32.120 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.120 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.120 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.121 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:32.121 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.121 11:31:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.121 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.658 00:29:34.658 real 0m9.697s 00:29:34.658 user 0m17.439s 00:29:34.658 sys 0m3.657s 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:34.658 ************************************ 00:29:34.658 END TEST nvmf_nmic 00:29:34.658 ************************************ 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:34.658 ************************************ 00:29:34.658 START TEST nvmf_fio_target 00:29:34.658 ************************************ 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:34.658 * Looking for test storage... 
00:29:34.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:34.658 
11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.658 --rc genhtml_branch_coverage=1 00:29:34.658 --rc genhtml_function_coverage=1 00:29:34.658 --rc genhtml_legend=1 00:29:34.658 --rc geninfo_all_blocks=1 00:29:34.658 --rc geninfo_unexecuted_blocks=1 00:29:34.658 00:29:34.658 ' 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.658 --rc genhtml_branch_coverage=1 00:29:34.658 --rc genhtml_function_coverage=1 00:29:34.658 --rc genhtml_legend=1 00:29:34.658 --rc geninfo_all_blocks=1 00:29:34.658 --rc geninfo_unexecuted_blocks=1 00:29:34.658 00:29:34.658 ' 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.658 --rc genhtml_branch_coverage=1 00:29:34.658 --rc genhtml_function_coverage=1 00:29:34.658 --rc genhtml_legend=1 00:29:34.658 --rc geninfo_all_blocks=1 00:29:34.658 --rc geninfo_unexecuted_blocks=1 00:29:34.658 00:29:34.658 ' 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.658 --rc genhtml_branch_coverage=1 00:29:34.658 --rc genhtml_function_coverage=1 00:29:34.658 --rc genhtml_legend=1 00:29:34.658 --rc geninfo_all_blocks=1 
00:29:34.658 --rc geninfo_unexecuted_blocks=1 00:29:34.658 00:29:34.658 ' 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.658 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:29:34.659 
11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.659 11:31:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:34.659 
11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:34.659 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:34.659 11:31:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:37.194 11:31:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:29:37.194 Found 0000:82:00.0 (0x8086 - 0x159b) 00:29:37.194 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:29:37.195 Found 0000:82:00.1 (0x8086 - 0x159b) 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.195 
11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:29:37.195 Found net 
devices under 0000:82:00.0: cvl_0_0 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:29:37.195 Found net devices under 0000:82:00.1: cvl_0_1 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:37.195 11:31:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:37.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:29:37.195 00:29:37.195 --- 10.0.0.2 ping statistics --- 00:29:37.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.195 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:37.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:29:37.195 00:29:37.195 --- 10.0.0.1 ping statistics --- 00:29:37.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.195 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.195 11:31:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2775568 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2775568 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2775568 ']' 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.195 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.196 [2024-11-19 11:31:32.593180] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:37.196 [2024-11-19 11:31:32.594188] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:29:37.196 [2024-11-19 11:31:32.594237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.196 [2024-11-19 11:31:32.671852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:37.455 [2024-11-19 11:31:32.728463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.455 [2024-11-19 11:31:32.728516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.455 [2024-11-19 11:31:32.728540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.455 [2024-11-19 11:31:32.728552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.455 [2024-11-19 11:31:32.728577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.455 [2024-11-19 11:31:32.730174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.455 [2024-11-19 11:31:32.730285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.455 [2024-11-19 11:31:32.730431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.455 [2024-11-19 11:31:32.730436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.455 [2024-11-19 11:31:32.825757] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:37.455 [2024-11-19 11:31:32.826020] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:37.455 [2024-11-19 11:31:32.826321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:37.455 [2024-11-19 11:31:32.826981] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:37.455 [2024-11-19 11:31:32.827185] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:37.455 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.455 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:29:37.455 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:37.455 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:37.455 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.455 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.455 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:37.715 [2024-11-19 11:31:33.131131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.715 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:38.282 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:29:38.282 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:29:38.282 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:29:38.282 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:38.848 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:29:38.848 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:39.106 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:29:39.106 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:29:39.365 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:39.653 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:29:39.653 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:39.911 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:29:39.911 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:40.170 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:29:40.170 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:29:40.428 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:40.685 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:40.685 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:40.943 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:40.943 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:41.509 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:41.509 [2024-11-19 11:31:36.951301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.509 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:29:41.766 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:29:42.024 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:42.282 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:29:42.282 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:29:42.283 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:42.283 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:29:42.283 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:29:42.283 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:29:44.808 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:44.808 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:44.808 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:44.808 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:29:44.809 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:44.809 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:29:44.809 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:44.809 [global] 00:29:44.809 thread=1 00:29:44.809 invalidate=1 00:29:44.809 rw=write 00:29:44.809 time_based=1 00:29:44.809 runtime=1 00:29:44.809 ioengine=libaio 00:29:44.809 direct=1 00:29:44.809 bs=4096 00:29:44.809 iodepth=1 00:29:44.809 norandommap=0 00:29:44.809 numjobs=1 00:29:44.809 00:29:44.809 verify_dump=1 00:29:44.809 verify_backlog=512 00:29:44.809 verify_state_save=0 00:29:44.809 do_verify=1 00:29:44.809 verify=crc32c-intel 00:29:44.809 [job0] 00:29:44.809 filename=/dev/nvme0n1 00:29:44.809 [job1] 00:29:44.809 filename=/dev/nvme0n2 00:29:44.809 [job2] 00:29:44.809 filename=/dev/nvme0n3 00:29:44.809 [job3] 00:29:44.809 filename=/dev/nvme0n4 00:29:44.809 Could not set queue depth (nvme0n1) 00:29:44.809 Could not set queue depth (nvme0n2) 00:29:44.809 Could not set queue depth (nvme0n3) 00:29:44.809 Could not set queue depth (nvme0n4) 00:29:44.809 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:44.809 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:44.809 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:44.809 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:44.809 fio-3.35 00:29:44.809 Starting 4 threads 00:29:46.185 00:29:46.185 job0: (groupid=0, jobs=1): err= 0: pid=2776628: Tue Nov 19 11:31:41 2024 00:29:46.185 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:29:46.185 slat (nsec): min=6191, max=19204, avg=7147.97, stdev=1096.38 00:29:46.185 clat (usec): min=191, max=1617, avg=243.56, stdev=56.76 00:29:46.185 lat (usec): min=198, 
max=1624, avg=250.70, stdev=56.88 00:29:46.185 clat percentiles (usec): 00:29:46.185 | 1.00th=[ 202], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:29:46.185 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:29:46.185 | 70.00th=[ 251], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:29:46.185 | 99.00th=[ 494], 99.50th=[ 506], 99.90th=[ 537], 99.95th=[ 1336], 00:29:46.185 | 99.99th=[ 1614] 00:29:46.185 write: IOPS=2533, BW=9.90MiB/s (10.4MB/s)(9.91MiB/1001msec); 0 zone resets 00:29:46.185 slat (nsec): min=7465, max=49501, avg=9455.05, stdev=2691.29 00:29:46.185 clat (usec): min=145, max=1818, avg=177.95, stdev=41.56 00:29:46.185 lat (usec): min=153, max=1836, avg=187.41, stdev=42.29 00:29:46.185 clat percentiles (usec): 00:29:46.185 | 1.00th=[ 149], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:29:46.185 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:29:46.185 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 212], 95.00th=[ 235], 00:29:46.185 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 314], 99.95th=[ 367], 00:29:46.185 | 99.99th=[ 1827] 00:29:46.185 bw ( KiB/s): min= 9464, max= 9464, per=31.49%, avg=9464.00, stdev= 0.00, samples=1 00:29:46.185 iops : min= 2366, max= 2366, avg=2366.00, stdev= 0.00, samples=1 00:29:46.185 lat (usec) : 250=85.03%, 500=14.55%, 750=0.35% 00:29:46.185 lat (msec) : 2=0.07% 00:29:46.185 cpu : usr=1.60%, sys=6.20%, ctx=4584, majf=0, minf=2 00:29:46.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:46.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.185 issued rwts: total=2048,2536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:46.185 job1: (groupid=0, jobs=1): err= 0: pid=2776631: Tue Nov 19 11:31:41 2024 00:29:46.185 read: IOPS=2073, BW=8296KiB/s (8495kB/s)(8304KiB/1001msec) 
00:29:46.185 slat (nsec): min=5069, max=34513, avg=7602.07, stdev=3467.64 00:29:46.185 clat (usec): min=192, max=1098, avg=247.08, stdev=51.39 00:29:46.185 lat (usec): min=198, max=1113, avg=254.68, stdev=53.27 00:29:46.185 clat percentiles (usec): 00:29:46.185 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:29:46.185 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:29:46.185 | 70.00th=[ 249], 80.00th=[ 273], 90.00th=[ 314], 95.00th=[ 326], 00:29:46.185 | 99.00th=[ 396], 99.50th=[ 416], 99.90th=[ 816], 99.95th=[ 971], 00:29:46.185 | 99.99th=[ 1106] 00:29:46.185 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:29:46.185 slat (usec): min=6, max=713, avg= 8.99, stdev=16.48 00:29:46.185 clat (usec): min=142, max=938, avg=170.56, stdev=27.40 00:29:46.185 lat (usec): min=149, max=947, avg=179.55, stdev=32.44 00:29:46.185 clat percentiles (usec): 00:29:46.185 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:29:46.185 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:29:46.185 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 198], 00:29:46.185 | 99.00th=[ 225], 99.50th=[ 243], 99.90th=[ 396], 99.95th=[ 922], 00:29:46.185 | 99.99th=[ 938] 00:29:46.185 bw ( KiB/s): min=10344, max=10344, per=34.42%, avg=10344.00, stdev= 0.00, samples=1 00:29:46.185 iops : min= 2586, max= 2586, avg=2586.00, stdev= 0.00, samples=1 00:29:46.185 lat (usec) : 250=86.99%, 500=12.88%, 1000=0.11% 00:29:46.186 lat (msec) : 2=0.02% 00:29:46.186 cpu : usr=1.40%, sys=4.30%, ctx=4640, majf=0, minf=1 00:29:46.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:46.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.186 issued rwts: total=2076,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.186 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:29:46.186 job2: (groupid=0, jobs=1): err= 0: pid=2776632: Tue Nov 19 11:31:41 2024 00:29:46.186 read: IOPS=21, BW=86.4KiB/s (88.4kB/s)(88.0KiB/1019msec) 00:29:46.186 slat (nsec): min=8550, max=12967, avg=10947.77, stdev=1114.22 00:29:46.186 clat (usec): min=40930, max=42370, avg=41638.41, stdev=516.59 00:29:46.186 lat (usec): min=40940, max=42379, avg=41649.36, stdev=516.31 00:29:46.186 clat percentiles (usec): 00:29:46.186 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:46.186 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:29:46.186 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:46.186 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:46.186 | 99.99th=[42206] 00:29:46.186 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:29:46.186 slat (nsec): min=7928, max=54890, avg=9638.49, stdev=2733.21 00:29:46.186 clat (usec): min=159, max=396, avg=187.66, stdev=22.75 00:29:46.186 lat (usec): min=167, max=406, avg=197.30, stdev=23.14 00:29:46.186 clat percentiles (usec): 00:29:46.186 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:29:46.186 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:29:46.186 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 229], 00:29:46.186 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 396], 99.95th=[ 396], 00:29:46.186 | 99.99th=[ 396] 00:29:46.186 bw ( KiB/s): min= 4096, max= 4096, per=13.63%, avg=4096.00, stdev= 0.00, samples=1 00:29:46.186 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:46.186 lat (usec) : 250=93.26%, 500=2.62% 00:29:46.186 lat (msec) : 50=4.12% 00:29:46.186 cpu : usr=0.29%, sys=0.59%, ctx=534, majf=0, minf=1 00:29:46.186 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:46.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.186 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.186 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:46.186 job3: (groupid=0, jobs=1): err= 0: pid=2776633: Tue Nov 19 11:31:41 2024 00:29:46.186 read: IOPS=1960, BW=7840KiB/s (8028kB/s)(7848KiB/1001msec) 00:29:46.186 slat (nsec): min=6787, max=30997, avg=7977.71, stdev=1858.30 00:29:46.186 clat (usec): min=225, max=536, avg=275.40, stdev=35.18 00:29:46.186 lat (usec): min=232, max=543, avg=283.38, stdev=35.85 00:29:46.186 clat percentiles (usec): 00:29:46.186 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 253], 00:29:46.186 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:29:46.186 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 330], 00:29:46.186 | 99.00th=[ 474], 99.50th=[ 502], 99.90th=[ 519], 99.95th=[ 537], 00:29:46.186 | 99.99th=[ 537] 00:29:46.186 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:29:46.186 slat (nsec): min=8375, max=29520, avg=9565.00, stdev=1577.83 00:29:46.186 clat (usec): min=159, max=4214, avg=201.92, stdev=93.03 00:29:46.186 lat (usec): min=168, max=4244, avg=211.48, stdev=93.57 00:29:46.186 clat percentiles (usec): 00:29:46.186 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 180], 00:29:46.186 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 198], 00:29:46.186 | 70.00th=[ 206], 80.00th=[ 217], 90.00th=[ 237], 95.00th=[ 249], 00:29:46.186 | 99.00th=[ 302], 99.50th=[ 326], 99.90th=[ 437], 99.95th=[ 490], 00:29:46.186 | 99.99th=[ 4228] 00:29:46.186 bw ( KiB/s): min= 8192, max= 8192, per=27.26%, avg=8192.00, stdev= 0.00, samples=1 00:29:46.186 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:46.186 lat (usec) : 250=55.26%, 500=44.44%, 750=0.27% 00:29:46.186 lat (msec) : 10=0.02% 00:29:46.186 cpu : usr=2.60%, sys=4.70%, ctx=4012, majf=0, minf=1 00:29:46.186 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:46.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.186 issued rwts: total=1962,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.186 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:46.186 00:29:46.186 Run status group 0 (all jobs): 00:29:46.186 READ: bw=23.4MiB/s (24.6MB/s), 86.4KiB/s-8296KiB/s (88.4kB/s-8495kB/s), io=23.9MiB (25.0MB), run=1001-1019msec 00:29:46.186 WRITE: bw=29.3MiB/s (30.8MB/s), 2010KiB/s-9.99MiB/s (2058kB/s-10.5MB/s), io=29.9MiB (31.4MB), run=1001-1019msec 00:29:46.186 00:29:46.186 Disk stats (read/write): 00:29:46.186 nvme0n1: ios=1831/2048, merge=0/0, ticks=470/348, in_queue=818, util=86.27% 00:29:46.186 nvme0n2: ios=1840/2048, merge=0/0, ticks=643/343, in_queue=986, util=100.00% 00:29:46.186 nvme0n3: ios=17/512, merge=0/0, ticks=711/94, in_queue=805, util=88.84% 00:29:46.186 nvme0n4: ios=1593/1882, merge=0/0, ticks=707/377, in_queue=1084, util=97.13% 00:29:46.186 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:29:46.186 [global] 00:29:46.186 thread=1 00:29:46.186 invalidate=1 00:29:46.186 rw=randwrite 00:29:46.186 time_based=1 00:29:46.186 runtime=1 00:29:46.186 ioengine=libaio 00:29:46.186 direct=1 00:29:46.186 bs=4096 00:29:46.186 iodepth=1 00:29:46.186 norandommap=0 00:29:46.186 numjobs=1 00:29:46.186 00:29:46.186 verify_dump=1 00:29:46.186 verify_backlog=512 00:29:46.186 verify_state_save=0 00:29:46.186 do_verify=1 00:29:46.186 verify=crc32c-intel 00:29:46.186 [job0] 00:29:46.186 filename=/dev/nvme0n1 00:29:46.186 [job1] 00:29:46.186 filename=/dev/nvme0n2 00:29:46.186 [job2] 00:29:46.186 filename=/dev/nvme0n3 00:29:46.186 [job3] 00:29:46.186 filename=/dev/nvme0n4 00:29:46.186 Could not set 
queue depth (nvme0n1) 00:29:46.186 Could not set queue depth (nvme0n2) 00:29:46.186 Could not set queue depth (nvme0n3) 00:29:46.186 Could not set queue depth (nvme0n4) 00:29:46.186 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:46.186 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:46.186 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:46.186 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:46.186 fio-3.35 00:29:46.186 Starting 4 threads 00:29:47.560 00:29:47.560 job0: (groupid=0, jobs=1): err= 0: pid=2776856: Tue Nov 19 11:31:42 2024 00:29:47.560 read: IOPS=178, BW=714KiB/s (731kB/s)(740KiB/1037msec) 00:29:47.560 slat (nsec): min=5872, max=62357, avg=14215.70, stdev=7408.71 00:29:47.560 clat (usec): min=235, max=41040, avg=4930.22, stdev=12924.67 00:29:47.560 lat (usec): min=244, max=41051, avg=4944.43, stdev=12923.69 00:29:47.560 clat percentiles (usec): 00:29:47.560 | 1.00th=[ 237], 5.00th=[ 249], 10.00th=[ 260], 20.00th=[ 281], 00:29:47.560 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 334], 00:29:47.560 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[41157], 95.00th=[41157], 00:29:47.560 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:47.560 | 99.99th=[41157] 00:29:47.560 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:29:47.560 slat (nsec): min=7266, max=30506, avg=9741.72, stdev=3053.47 00:29:47.560 clat (usec): min=167, max=485, avg=224.87, stdev=29.81 00:29:47.560 lat (usec): min=175, max=497, avg=234.61, stdev=30.32 00:29:47.560 clat percentiles (usec): 00:29:47.560 | 1.00th=[ 174], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 208], 00:29:47.560 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:29:47.560 | 70.00th=[ 
231], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 265], 00:29:47.560 | 99.00th=[ 347], 99.50th=[ 400], 99.90th=[ 486], 99.95th=[ 486], 00:29:47.560 | 99.99th=[ 486] 00:29:47.560 bw ( KiB/s): min= 4096, max= 4096, per=18.15%, avg=4096.00, stdev= 0.00, samples=1 00:29:47.560 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:47.560 lat (usec) : 250=66.57%, 500=29.84%, 750=0.57% 00:29:47.560 lat (msec) : 50=3.01% 00:29:47.560 cpu : usr=0.39%, sys=0.68%, ctx=698, majf=0, minf=1 00:29:47.561 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.561 issued rwts: total=185,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.561 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:47.561 job1: (groupid=0, jobs=1): err= 0: pid=2776858: Tue Nov 19 11:31:42 2024 00:29:47.561 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:29:47.561 slat (nsec): min=6575, max=25557, avg=8122.26, stdev=1740.47 00:29:47.561 clat (usec): min=206, max=567, avg=257.75, stdev=43.18 00:29:47.561 lat (usec): min=214, max=574, avg=265.87, stdev=43.88 00:29:47.561 clat percentiles (usec): 00:29:47.561 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:29:47.561 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:29:47.561 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 293], 00:29:47.561 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 545], 99.95th=[ 545], 00:29:47.561 | 99.99th=[ 570] 00:29:47.561 write: IOPS=2264, BW=9059KiB/s (9276kB/s)(9068KiB/1001msec); 0 zone resets 00:29:47.561 slat (nsec): min=7287, max=45870, avg=10087.23, stdev=2584.51 00:29:47.561 clat (usec): min=142, max=570, avg=185.59, stdev=29.08 00:29:47.561 lat (usec): min=151, max=587, avg=195.68, stdev=30.09 00:29:47.561 clat percentiles (usec): 00:29:47.561 
| 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:29:47.561 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 186], 00:29:47.561 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 217], 95.00th=[ 239], 00:29:47.561 | 99.00th=[ 297], 99.50th=[ 318], 99.90th=[ 367], 99.95th=[ 379], 00:29:47.561 | 99.99th=[ 570] 00:29:47.561 bw ( KiB/s): min= 8608, max= 8608, per=38.14%, avg=8608.00, stdev= 0.00, samples=1 00:29:47.561 iops : min= 2152, max= 2152, avg=2152.00, stdev= 0.00, samples=1 00:29:47.561 lat (usec) : 250=75.39%, 500=23.94%, 750=0.67% 00:29:47.561 cpu : usr=2.80%, sys=5.50%, ctx=4316, majf=0, minf=1 00:29:47.561 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.561 issued rwts: total=2048,2267,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.561 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:47.561 job2: (groupid=0, jobs=1): err= 0: pid=2776864: Tue Nov 19 11:31:42 2024 00:29:47.561 read: IOPS=2108, BW=8436KiB/s (8638kB/s)(8444KiB/1001msec) 00:29:47.561 slat (nsec): min=6318, max=27662, avg=8455.64, stdev=1560.58 00:29:47.561 clat (usec): min=194, max=985, avg=226.62, stdev=33.15 00:29:47.561 lat (usec): min=201, max=994, avg=235.08, stdev=33.63 00:29:47.561 clat percentiles (usec): 00:29:47.561 | 1.00th=[ 200], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 208], 00:29:47.561 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:29:47.561 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 262], 00:29:47.561 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 523], 99.95th=[ 947], 00:29:47.561 | 99.99th=[ 988] 00:29:47.561 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:29:47.561 slat (nsec): min=7374, max=79217, avg=10682.98, stdev=4165.48 00:29:47.561 clat (usec): min=140, max=389, 
avg=181.05, stdev=25.49 00:29:47.561 lat (usec): min=156, max=398, avg=191.73, stdev=26.94 00:29:47.561 clat percentiles (usec): 00:29:47.561 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:29:47.561 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:29:47.561 | 70.00th=[ 188], 80.00th=[ 200], 90.00th=[ 221], 95.00th=[ 233], 00:29:47.561 | 99.00th=[ 258], 99.50th=[ 277], 99.90th=[ 314], 99.95th=[ 318], 00:29:47.561 | 99.99th=[ 392] 00:29:47.561 bw ( KiB/s): min= 9432, max= 9432, per=41.79%, avg=9432.00, stdev= 0.00, samples=1 00:29:47.561 iops : min= 2358, max= 2358, avg=2358.00, stdev= 0.00, samples=1 00:29:47.561 lat (usec) : 250=95.16%, 500=4.77%, 750=0.02%, 1000=0.04% 00:29:47.561 cpu : usr=2.50%, sys=6.40%, ctx=4674, majf=0, minf=1 00:29:47.561 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.561 issued rwts: total=2111,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.561 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:47.561 job3: (groupid=0, jobs=1): err= 0: pid=2776865: Tue Nov 19 11:31:42 2024 00:29:47.561 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:29:47.561 slat (nsec): min=8028, max=18203, avg=12112.86, stdev=3295.71 00:29:47.561 clat (usec): min=40892, max=41994, avg=41060.56, stdev=254.11 00:29:47.561 lat (usec): min=40908, max=42009, avg=41072.67, stdev=254.15 00:29:47.561 clat percentiles (usec): 00:29:47.561 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:47.561 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:47.561 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:29:47.561 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:47.561 | 99.99th=[42206] 00:29:47.561 write: IOPS=498, 
BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:29:47.561 slat (nsec): min=6879, max=38260, avg=12172.88, stdev=5219.23 00:29:47.561 clat (usec): min=165, max=665, avg=225.55, stdev=59.41 00:29:47.561 lat (usec): min=177, max=675, avg=237.72, stdev=60.47 00:29:47.561 clat percentiles (usec): 00:29:47.561 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:29:47.561 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 219], 00:29:47.561 | 70.00th=[ 229], 80.00th=[ 251], 90.00th=[ 277], 95.00th=[ 351], 00:29:47.561 | 99.00th=[ 469], 99.50th=[ 494], 99.90th=[ 668], 99.95th=[ 668], 00:29:47.561 | 99.99th=[ 668] 00:29:47.561 bw ( KiB/s): min= 4096, max= 4096, per=18.15%, avg=4096.00, stdev= 0.00, samples=1 00:29:47.561 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:47.561 lat (usec) : 250=76.40%, 500=19.10%, 750=0.37% 00:29:47.561 lat (msec) : 50=4.12% 00:29:47.561 cpu : usr=0.58%, sys=0.29%, ctx=534, majf=0, minf=2 00:29:47.561 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.561 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.561 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:47.561 00:29:47.561 Run status group 0 (all jobs): 00:29:47.561 READ: bw=16.4MiB/s (17.2MB/s), 85.7KiB/s-8436KiB/s (87.7kB/s-8638kB/s), io=17.1MiB (17.9MB), run=1001-1037msec 00:29:47.561 WRITE: bw=22.0MiB/s (23.1MB/s), 1975KiB/s-9.99MiB/s (2022kB/s-10.5MB/s), io=22.9MiB (24.0MB), run=1001-1037msec 00:29:47.561 00:29:47.561 Disk stats (read/write): 00:29:47.561 nvme0n1: ios=206/512, merge=0/0, ticks=1574/115, in_queue=1689, util=85.87% 00:29:47.561 nvme0n2: ios=1713/2048, merge=0/0, ticks=675/375, in_queue=1050, util=89.75% 00:29:47.561 nvme0n3: ios=1935/2048, merge=0/0, ticks=534/369, in_queue=903, 
util=94.90% 00:29:47.561 nvme0n4: ios=74/512, merge=0/0, ticks=776/111, in_queue=887, util=95.49% 00:29:47.561 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:29:47.561 [global] 00:29:47.561 thread=1 00:29:47.561 invalidate=1 00:29:47.561 rw=write 00:29:47.561 time_based=1 00:29:47.561 runtime=1 00:29:47.561 ioengine=libaio 00:29:47.561 direct=1 00:29:47.561 bs=4096 00:29:47.561 iodepth=128 00:29:47.561 norandommap=0 00:29:47.561 numjobs=1 00:29:47.561 00:29:47.561 verify_dump=1 00:29:47.561 verify_backlog=512 00:29:47.561 verify_state_save=0 00:29:47.561 do_verify=1 00:29:47.561 verify=crc32c-intel 00:29:47.561 [job0] 00:29:47.561 filename=/dev/nvme0n1 00:29:47.561 [job1] 00:29:47.561 filename=/dev/nvme0n2 00:29:47.561 [job2] 00:29:47.561 filename=/dev/nvme0n3 00:29:47.561 [job3] 00:29:47.561 filename=/dev/nvme0n4 00:29:47.561 Could not set queue depth (nvme0n1) 00:29:47.561 Could not set queue depth (nvme0n2) 00:29:47.561 Could not set queue depth (nvme0n3) 00:29:47.561 Could not set queue depth (nvme0n4) 00:29:47.561 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:47.561 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:47.561 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:47.561 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:47.561 fio-3.35 00:29:47.561 Starting 4 threads 00:29:48.936 00:29:48.937 job0: (groupid=0, jobs=1): err= 0: pid=2777125: Tue Nov 19 11:31:44 2024 00:29:48.937 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:29:48.937 slat (usec): min=2, max=18180, avg=117.46, stdev=829.01 00:29:48.937 clat (usec): min=3543, max=55325, 
avg=15945.53, stdev=7299.20 00:29:48.937 lat (usec): min=3558, max=55333, avg=16062.99, stdev=7340.95 00:29:48.937 clat percentiles (usec): 00:29:48.937 | 1.00th=[ 3654], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[11600], 00:29:48.937 | 30.00th=[11994], 40.00th=[12256], 50.00th=[13829], 60.00th=[16319], 00:29:48.937 | 70.00th=[18482], 80.00th=[20317], 90.00th=[23725], 95.00th=[31065], 00:29:48.937 | 99.00th=[35914], 99.50th=[54264], 99.90th=[54789], 99.95th=[55313], 00:29:48.937 | 99.99th=[55313] 00:29:48.937 write: IOPS=3973, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1005msec); 0 zone resets 00:29:48.937 slat (usec): min=3, max=16136, avg=130.95, stdev=714.86 00:29:48.937 clat (usec): min=755, max=59133, avg=17631.99, stdev=10243.56 00:29:48.937 lat (usec): min=777, max=59151, avg=17762.94, stdev=10323.19 00:29:48.937 clat percentiles (usec): 00:29:48.937 | 1.00th=[ 3949], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[11731], 00:29:48.937 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[14615], 00:29:48.937 | 70.00th=[18744], 80.00th=[21627], 90.00th=[31327], 95.00th=[45351], 00:29:48.937 | 99.00th=[51119], 99.50th=[54264], 99.90th=[58983], 99.95th=[58983], 00:29:48.937 | 99.99th=[58983] 00:29:48.937 bw ( KiB/s): min=12544, max=18376, per=23.22%, avg=15460.00, stdev=4123.85, samples=2 00:29:48.937 iops : min= 3136, max= 4594, avg=3865.00, stdev=1030.96, samples=2 00:29:48.937 lat (usec) : 1000=0.01% 00:29:48.937 lat (msec) : 2=0.22%, 4=1.76%, 10=9.82%, 20=63.51%, 50=23.74% 00:29:48.937 lat (msec) : 100=0.94% 00:29:48.937 cpu : usr=5.08%, sys=7.07%, ctx=425, majf=0, minf=1 00:29:48.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:48.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:48.937 issued rwts: total=3584,3993,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.937 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:29:48.937 job1: (groupid=0, jobs=1): err= 0: pid=2777143: Tue Nov 19 11:31:44 2024 00:29:48.937 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:29:48.937 slat (usec): min=2, max=34392, avg=144.52, stdev=1135.07 00:29:48.937 clat (usec): min=7851, max=95514, avg=18968.71, stdev=13189.92 00:29:48.937 lat (usec): min=7863, max=95555, avg=19113.23, stdev=13290.75 00:29:48.937 clat percentiles (usec): 00:29:48.937 | 1.00th=[ 8848], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11207], 00:29:48.937 | 30.00th=[11600], 40.00th=[12518], 50.00th=[12911], 60.00th=[15533], 00:29:48.937 | 70.00th=[19006], 80.00th=[27132], 90.00th=[34341], 95.00th=[45351], 00:29:48.937 | 99.00th=[73925], 99.50th=[73925], 99.90th=[77071], 99.95th=[82314], 00:29:48.937 | 99.99th=[95945] 00:29:48.937 write: IOPS=3582, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1005msec); 0 zone resets 00:29:48.937 slat (usec): min=3, max=37007, avg=123.80, stdev=1017.61 00:29:48.937 clat (usec): min=1278, max=71662, avg=16455.15, stdev=10097.52 00:29:48.937 lat (usec): min=6017, max=71682, avg=16578.96, stdev=10194.70 00:29:48.937 clat percentiles (usec): 00:29:48.937 | 1.00th=[ 7439], 5.00th=[10159], 10.00th=[10683], 20.00th=[11338], 00:29:48.937 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12780], 00:29:48.937 | 70.00th=[13566], 80.00th=[21103], 90.00th=[28705], 95.00th=[38536], 00:29:48.937 | 99.00th=[58983], 99.50th=[59507], 99.90th=[59507], 99.95th=[68682], 00:29:48.937 | 99.99th=[71828] 00:29:48.937 bw ( KiB/s): min=12288, max=16384, per=21.54%, avg=14336.00, stdev=2896.31, samples=2 00:29:48.937 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:29:48.937 lat (msec) : 2=0.01%, 10=4.91%, 20=71.01%, 50=21.24%, 100=2.83% 00:29:48.937 cpu : usr=3.49%, sys=8.57%, ctx=230, majf=0, minf=1 00:29:48.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:29:48.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.937 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:48.937 issued rwts: total=3584,3600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:48.937 job2: (groupid=0, jobs=1): err= 0: pid=2777172: Tue Nov 19 11:31:44 2024 00:29:48.937 read: IOPS=4669, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1005msec) 00:29:48.937 slat (usec): min=2, max=27085, avg=104.11, stdev=915.54 00:29:48.937 clat (usec): min=3267, max=36791, avg=13255.39, stdev=4440.10 00:29:48.937 lat (usec): min=6533, max=36804, avg=13359.50, stdev=4508.10 00:29:48.937 clat percentiles (usec): 00:29:48.937 | 1.00th=[ 6849], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9896], 00:29:48.937 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12125], 60.00th=[13829], 00:29:48.937 | 70.00th=[14615], 80.00th=[15533], 90.00th=[17957], 95.00th=[19792], 00:29:48.937 | 99.00th=[33424], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:29:48.937 | 99.99th=[36963] 00:29:48.937 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:29:48.937 slat (usec): min=3, max=12388, avg=88.36, stdev=663.65 00:29:48.937 clat (usec): min=4068, max=33006, avg=12751.53, stdev=3895.02 00:29:48.937 lat (usec): min=4076, max=33017, avg=12839.89, stdev=3921.20 00:29:48.937 clat percentiles (usec): 00:29:48.937 | 1.00th=[ 6128], 5.00th=[ 7177], 10.00th=[ 7832], 20.00th=[ 9765], 00:29:48.937 | 30.00th=[11076], 40.00th=[12125], 50.00th=[12518], 60.00th=[13566], 00:29:48.937 | 70.00th=[14484], 80.00th=[15139], 90.00th=[16319], 95.00th=[16909], 00:29:48.937 | 99.00th=[26608], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:29:48.937 | 99.99th=[32900] 00:29:48.937 bw ( KiB/s): min=20096, max=20528, per=30.51%, avg=20312.00, stdev=305.47, samples=2 00:29:48.937 iops : min= 5024, max= 5132, avg=5078.00, stdev=76.37, samples=2 00:29:48.937 lat (msec) : 4=0.01%, 10=21.56%, 20=74.33%, 50=4.10% 00:29:48.937 cpu : usr=5.88%, sys=10.46%, ctx=302, 
majf=0, minf=1 00:29:48.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:29:48.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:48.937 issued rwts: total=4693,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:48.937 job3: (groupid=0, jobs=1): err= 0: pid=2777183: Tue Nov 19 11:31:44 2024 00:29:48.937 read: IOPS=4041, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1010msec) 00:29:48.937 slat (usec): min=2, max=19312, avg=125.91, stdev=895.48 00:29:48.937 clat (usec): min=1249, max=51003, avg=17282.79, stdev=10465.14 00:29:48.937 lat (usec): min=1268, max=57547, avg=17408.70, stdev=10523.90 00:29:48.937 clat percentiles (usec): 00:29:48.937 | 1.00th=[ 2180], 5.00th=[ 2868], 10.00th=[10683], 20.00th=[12256], 00:29:48.937 | 30.00th=[12518], 40.00th=[13435], 50.00th=[13698], 60.00th=[14222], 00:29:48.937 | 70.00th=[15139], 80.00th=[21103], 90.00th=[34866], 95.00th=[44303], 00:29:48.937 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:29:48.937 | 99.99th=[51119] 00:29:48.937 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:29:48.937 slat (usec): min=3, max=19551, avg=101.46, stdev=623.06 00:29:48.937 clat (usec): min=821, max=35318, avg=14055.38, stdev=3015.77 00:29:48.937 lat (usec): min=840, max=35350, avg=14156.84, stdev=3062.40 00:29:48.937 clat percentiles (usec): 00:29:48.937 | 1.00th=[ 8586], 5.00th=[11076], 10.00th=[11863], 20.00th=[12256], 00:29:48.937 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13566], 60.00th=[13829], 00:29:48.937 | 70.00th=[14091], 80.00th=[15270], 90.00th=[17433], 95.00th=[20579], 00:29:48.937 | 99.00th=[24249], 99.50th=[27657], 99.90th=[33162], 99.95th=[33162], 00:29:48.937 | 99.99th=[35390] 00:29:48.937 bw ( KiB/s): min=16176, max=16592, per=24.61%, avg=16384.00, stdev=294.16, samples=2 
00:29:48.937 iops : min= 4044, max= 4148, avg=4096.00, stdev=73.54, samples=2 00:29:48.937 lat (usec) : 1000=0.02% 00:29:48.937 lat (msec) : 2=0.42%, 4=2.63%, 10=2.12%, 20=80.62%, 50=13.26% 00:29:48.937 lat (msec) : 100=0.94% 00:29:48.937 cpu : usr=5.75%, sys=9.02%, ctx=505, majf=0, minf=1 00:29:48.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:48.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:48.937 issued rwts: total=4082,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:48.937 00:29:48.937 Run status group 0 (all jobs): 00:29:48.937 READ: bw=61.7MiB/s (64.7MB/s), 13.9MiB/s-18.2MiB/s (14.6MB/s-19.1MB/s), io=62.3MiB (65.3MB), run=1005-1010msec 00:29:48.937 WRITE: bw=65.0MiB/s (68.2MB/s), 14.0MiB/s-19.9MiB/s (14.7MB/s-20.9MB/s), io=65.7MiB (68.8MB), run=1005-1010msec 00:29:48.937 00:29:48.937 Disk stats (read/write): 00:29:48.937 nvme0n1: ios=3116/3391, merge=0/0, ticks=33754/41708, in_queue=75462, util=94.59% 00:29:48.937 nvme0n2: ios=3114/3160, merge=0/0, ticks=24055/24967, in_queue=49022, util=92.07% 00:29:48.937 nvme0n3: ios=4051/4096, merge=0/0, ticks=45317/43212, in_queue=88529, util=90.49% 00:29:48.937 nvme0n4: ios=3584/3879, merge=0/0, ticks=24926/25410, in_queue=50336, util=89.67% 00:29:48.937 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:29:48.937 [global] 00:29:48.937 thread=1 00:29:48.937 invalidate=1 00:29:48.937 rw=randwrite 00:29:48.938 time_based=1 00:29:48.938 runtime=1 00:29:48.938 ioengine=libaio 00:29:48.938 direct=1 00:29:48.938 bs=4096 00:29:48.938 iodepth=128 00:29:48.938 norandommap=0 00:29:48.938 numjobs=1 00:29:48.938 00:29:48.938 verify_dump=1 00:29:48.938 
verify_backlog=512 00:29:48.938 verify_state_save=0 00:29:48.938 do_verify=1 00:29:48.938 verify=crc32c-intel 00:29:48.938 [job0] 00:29:48.938 filename=/dev/nvme0n1 00:29:48.938 [job1] 00:29:48.938 filename=/dev/nvme0n2 00:29:48.938 [job2] 00:29:48.938 filename=/dev/nvme0n3 00:29:48.938 [job3] 00:29:48.938 filename=/dev/nvme0n4 00:29:48.938 Could not set queue depth (nvme0n1) 00:29:48.938 Could not set queue depth (nvme0n2) 00:29:48.938 Could not set queue depth (nvme0n3) 00:29:48.938 Could not set queue depth (nvme0n4) 00:29:48.938 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:48.938 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:48.938 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:48.938 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:48.938 fio-3.35 00:29:48.938 Starting 4 threads 00:29:50.313 00:29:50.313 job0: (groupid=0, jobs=1): err= 0: pid=2777436: Tue Nov 19 11:31:45 2024 00:29:50.313 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:29:50.313 slat (usec): min=2, max=10495, avg=117.41, stdev=737.46 00:29:50.313 clat (usec): min=4682, max=36228, avg=15281.27, stdev=5139.02 00:29:50.313 lat (usec): min=4690, max=41293, avg=15398.68, stdev=5185.45 00:29:50.313 clat percentiles (usec): 00:29:50.313 | 1.00th=[ 4817], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10814], 00:29:50.313 | 30.00th=[11994], 40.00th=[13042], 50.00th=[14353], 60.00th=[16450], 00:29:50.313 | 70.00th=[17433], 80.00th=[18744], 90.00th=[22152], 95.00th=[25560], 00:29:50.313 | 99.00th=[30540], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:29:50.313 | 99.99th=[36439] 00:29:50.313 write: IOPS=4430, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1001msec); 0 zone resets 00:29:50.313 slat (usec): min=3, max=11121, 
avg=108.01, stdev=631.06 00:29:50.313 clat (usec): min=411, max=35468, avg=14306.84, stdev=5300.70 00:29:50.313 lat (usec): min=3350, max=35473, avg=14414.85, stdev=5334.90 00:29:50.313 clat percentiles (usec): 00:29:50.313 | 1.00th=[ 5342], 5.00th=[ 7635], 10.00th=[ 9372], 20.00th=[10814], 00:29:50.313 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12780], 60.00th=[13566], 00:29:50.313 | 70.00th=[16450], 80.00th=[18482], 90.00th=[22414], 95.00th=[24511], 00:29:50.313 | 99.00th=[28967], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:29:50.313 | 99.99th=[35390] 00:29:50.313 bw ( KiB/s): min=20480, max=20480, per=32.24%, avg=20480.00, stdev= 0.00, samples=1 00:29:50.313 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:29:50.313 lat (usec) : 500=0.01% 00:29:50.313 lat (msec) : 4=0.45%, 10=12.35%, 20=73.17%, 50=14.02% 00:29:50.313 cpu : usr=2.80%, sys=5.90%, ctx=416, majf=0, minf=1 00:29:50.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:29:50.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:50.313 issued rwts: total=4096,4435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:50.313 job1: (groupid=0, jobs=1): err= 0: pid=2777437: Tue Nov 19 11:31:45 2024 00:29:50.313 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:29:50.313 slat (usec): min=2, max=44309, avg=135.68, stdev=1038.34 00:29:50.313 clat (usec): min=3976, max=60104, avg=17249.61, stdev=9559.01 00:29:50.313 lat (usec): min=3980, max=60111, avg=17385.29, stdev=9605.66 00:29:50.313 clat percentiles (usec): 00:29:50.313 | 1.00th=[ 7635], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11338], 00:29:50.313 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13304], 60.00th=[16057], 00:29:50.313 | 70.00th=[18220], 80.00th=[22414], 90.00th=[28967], 95.00th=[33162], 00:29:50.313 | 
99.00th=[56361], 99.50th=[56361], 99.90th=[60031], 99.95th=[60031], 00:29:50.313 | 99.99th=[60031] 00:29:50.313 write: IOPS=3599, BW=14.1MiB/s (14.7MB/s)(14.1MiB/1002msec); 0 zone resets 00:29:50.313 slat (usec): min=3, max=22301, avg=127.18, stdev=722.99 00:29:50.313 clat (usec): min=415, max=74904, avg=18107.50, stdev=11805.72 00:29:50.313 lat (usec): min=428, max=74911, avg=18234.68, stdev=11848.80 00:29:50.313 clat percentiles (usec): 00:29:50.313 | 1.00th=[ 963], 5.00th=[ 3261], 10.00th=[ 8717], 20.00th=[11207], 00:29:50.313 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13829], 60.00th=[16319], 00:29:50.313 | 70.00th=[21365], 80.00th=[23987], 90.00th=[36439], 95.00th=[42206], 00:29:50.313 | 99.00th=[57934], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:29:50.313 | 99.99th=[74974] 00:29:50.313 bw ( KiB/s): min=12288, max=16384, per=22.57%, avg=14336.00, stdev=2896.31, samples=2 00:29:50.313 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:29:50.313 lat (usec) : 500=0.04%, 750=0.04%, 1000=0.54% 00:29:50.313 lat (msec) : 2=0.78%, 4=2.17%, 10=6.40%, 20=61.94%, 50=25.57% 00:29:50.313 lat (msec) : 100=2.52% 00:29:50.313 cpu : usr=1.70%, sys=4.60%, ctx=458, majf=0, minf=1 00:29:50.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:29:50.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:50.313 issued rwts: total=3584,3607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:50.313 job2: (groupid=0, jobs=1): err= 0: pid=2777438: Tue Nov 19 11:31:45 2024 00:29:50.313 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:29:50.313 slat (usec): min=2, max=18101, avg=113.18, stdev=709.73 00:29:50.313 clat (usec): min=7969, max=35094, avg=14675.73, stdev=4413.30 00:29:50.313 lat (usec): min=7979, max=35099, avg=14788.90, stdev=4443.86 
00:29:50.313 clat percentiles (usec): 00:29:50.313 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[11076], 20.00th=[11600], 00:29:50.313 | 30.00th=[12387], 40.00th=[13042], 50.00th=[13698], 60.00th=[14353], 00:29:50.313 | 70.00th=[15533], 80.00th=[16909], 90.00th=[18744], 95.00th=[21890], 00:29:50.314 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:29:50.314 | 99.99th=[34866] 00:29:50.314 write: IOPS=4401, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1002msec); 0 zone resets 00:29:50.314 slat (usec): min=3, max=11774, avg=114.34, stdev=615.95 00:29:50.314 clat (usec): min=501, max=36150, avg=15090.78, stdev=4758.99 00:29:50.314 lat (usec): min=5078, max=36174, avg=15205.12, stdev=4797.15 00:29:50.314 clat percentiles (usec): 00:29:50.314 | 1.00th=[ 6980], 5.00th=[10945], 10.00th=[11994], 20.00th=[12387], 00:29:50.314 | 30.00th=[12780], 40.00th=[12911], 50.00th=[12911], 60.00th=[13566], 00:29:50.314 | 70.00th=[15795], 80.00th=[17695], 90.00th=[20317], 95.00th=[27132], 00:29:50.314 | 99.00th=[33162], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:29:50.314 | 99.99th=[35914] 00:29:50.314 bw ( KiB/s): min=15824, max=18468, per=26.99%, avg=17146.00, stdev=1869.59, samples=2 00:29:50.314 iops : min= 3956, max= 4617, avg=4286.50, stdev=467.40, samples=2 00:29:50.314 lat (usec) : 750=0.01% 00:29:50.314 lat (msec) : 10=4.14%, 20=87.11%, 50=8.74% 00:29:50.314 cpu : usr=4.10%, sys=6.19%, ctx=454, majf=0, minf=1 00:29:50.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:29:50.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:50.314 issued rwts: total=4096,4410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.314 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:50.314 job3: (groupid=0, jobs=1): err= 0: pid=2777439: Tue Nov 19 11:31:45 2024 00:29:50.314 read: IOPS=3670, BW=14.3MiB/s 
(15.0MB/s)(14.9MiB/1042msec) 00:29:50.314 slat (usec): min=3, max=9678, avg=139.30, stdev=768.55 00:29:50.314 clat (usec): min=9002, max=55874, avg=19426.45, stdev=8479.62 00:29:50.314 lat (usec): min=9014, max=55890, avg=19565.75, stdev=8514.61 00:29:50.314 clat percentiles (usec): 00:29:50.314 | 1.00th=[ 9634], 5.00th=[11076], 10.00th=[11994], 20.00th=[12649], 00:29:50.314 | 30.00th=[14091], 40.00th=[15270], 50.00th=[16581], 60.00th=[18220], 00:29:50.314 | 70.00th=[21627], 80.00th=[26084], 90.00th=[29754], 95.00th=[34341], 00:29:50.314 | 99.00th=[52691], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:29:50.314 | 99.99th=[55837] 00:29:50.314 write: IOPS=3930, BW=15.4MiB/s (16.1MB/s)(16.0MiB/1042msec); 0 zone resets 00:29:50.314 slat (usec): min=4, max=7244, avg=103.67, stdev=547.86 00:29:50.314 clat (usec): min=7878, max=26966, avg=13901.16, stdev=2089.24 00:29:50.314 lat (usec): min=7892, max=26973, avg=14004.83, stdev=2113.95 00:29:50.314 clat percentiles (usec): 00:29:50.314 | 1.00th=[ 8979], 5.00th=[11076], 10.00th=[12256], 20.00th=[12387], 00:29:50.314 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13435], 60.00th=[13829], 00:29:50.314 | 70.00th=[14222], 80.00th=[15664], 90.00th=[16450], 95.00th=[17695], 00:29:50.314 | 99.00th=[21365], 99.50th=[21365], 99.90th=[22152], 99.95th=[22152], 00:29:50.314 | 99.99th=[26870] 00:29:50.314 bw ( KiB/s): min=14640, max=18128, per=25.79%, avg=16384.00, stdev=2466.39, samples=2 00:29:50.314 iops : min= 3660, max= 4532, avg=4096.00, stdev=616.60, samples=2 00:29:50.314 lat (msec) : 10=2.50%, 20=79.09%, 50=17.73%, 100=0.68% 00:29:50.314 cpu : usr=4.71%, sys=7.20%, ctx=371, majf=0, minf=1 00:29:50.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:50.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:50.314 issued rwts: total=3825,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:29:50.314 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:50.314 00:29:50.314 Run status group 0 (all jobs): 00:29:50.314 READ: bw=58.5MiB/s (61.3MB/s), 14.0MiB/s-16.0MiB/s (14.7MB/s-16.8MB/s), io=60.9MiB (63.9MB), run=1001-1042msec 00:29:50.314 WRITE: bw=62.0MiB/s (65.0MB/s), 14.1MiB/s-17.3MiB/s (14.7MB/s-18.1MB/s), io=64.6MiB (67.8MB), run=1001-1042msec 00:29:50.314 00:29:50.314 Disk stats (read/write): 00:29:50.314 nvme0n1: ios=3486/3584, merge=0/0, ticks=26589/24769, in_queue=51358, util=96.39% 00:29:50.314 nvme0n2: ios=2573/2839, merge=0/0, ticks=17554/23759, in_queue=41313, util=84.22% 00:29:50.314 nvme0n3: ios=3584/3791, merge=0/0, ticks=19114/18789, in_queue=37903, util=88.40% 00:29:50.314 nvme0n4: ios=3129/3447, merge=0/0, ticks=19286/14540, in_queue=33826, util=95.82% 00:29:50.314 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:29:50.314 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2777577 00:29:50.314 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:29:50.314 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:29:50.314 [global] 00:29:50.314 thread=1 00:29:50.314 invalidate=1 00:29:50.314 rw=read 00:29:50.314 time_based=1 00:29:50.314 runtime=10 00:29:50.314 ioengine=libaio 00:29:50.314 direct=1 00:29:50.314 bs=4096 00:29:50.314 iodepth=1 00:29:50.314 norandommap=1 00:29:50.314 numjobs=1 00:29:50.314 00:29:50.314 [job0] 00:29:50.314 filename=/dev/nvme0n1 00:29:50.314 [job1] 00:29:50.314 filename=/dev/nvme0n2 00:29:50.314 [job2] 00:29:50.314 filename=/dev/nvme0n3 00:29:50.314 [job3] 00:29:50.314 filename=/dev/nvme0n4 00:29:50.314 Could not set queue depth (nvme0n1) 00:29:50.314 Could not set queue depth (nvme0n2) 00:29:50.314 Could not 
set queue depth (nvme0n3) 00:29:50.314 Could not set queue depth (nvme0n4) 00:29:50.572 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:50.572 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:50.572 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:50.572 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:50.572 fio-3.35 00:29:50.572 Starting 4 threads 00:29:53.854 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:29:53.854 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:29:53.854 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=294912, buflen=4096 00:29:53.854 fio: pid=2777673, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:53.854 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:53.854 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:29:53.854 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=42405888, buflen=4096 00:29:53.854 fio: pid=2777672, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:54.111 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45723648, buflen=4096 00:29:54.112 fio: pid=2777670, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 
00:29:54.369 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:54.369 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:29:54.628 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:54.628 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:29:54.628 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4300800, buflen=4096 00:29:54.628 fio: pid=2777671, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:54.628 00:29:54.628 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2777670: Tue Nov 19 11:31:49 2024 00:29:54.628 read: IOPS=3172, BW=12.4MiB/s (13.0MB/s)(43.6MiB/3519msec) 00:29:54.628 slat (usec): min=4, max=8094, avg=12.21, stdev=134.99 00:29:54.628 clat (usec): min=195, max=41117, avg=298.57, stdev=548.85 00:29:54.628 lat (usec): min=200, max=41133, avg=310.78, stdev=565.65 00:29:54.628 clat percentiles (usec): 00:29:54.628 | 1.00th=[ 210], 5.00th=[ 227], 10.00th=[ 245], 20.00th=[ 255], 00:29:54.628 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:29:54.628 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 355], 95.00th=[ 424], 00:29:54.628 | 99.00th=[ 537], 99.50th=[ 586], 99.90th=[ 709], 99.95th=[ 824], 00:29:54.628 | 99.99th=[41157] 00:29:54.628 bw ( KiB/s): min= 9704, max=14040, per=52.76%, avg=12464.00, stdev=1613.46, samples=6 00:29:54.628 iops : min= 2426, max= 3510, avg=3116.00, stdev=403.36, samples=6 00:29:54.628 lat (usec) : 
250=13.14%, 500=85.10%, 750=1.68%, 1000=0.04% 00:29:54.628 lat (msec) : 2=0.02%, 50=0.02% 00:29:54.628 cpu : usr=1.08%, sys=3.75%, ctx=11169, majf=0, minf=2 00:29:54.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:54.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.628 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.628 issued rwts: total=11164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:54.628 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2777671: Tue Nov 19 11:31:49 2024 00:29:54.628 read: IOPS=274, BW=1096KiB/s (1122kB/s)(4200KiB/3833msec) 00:29:54.628 slat (usec): min=5, max=9877, avg=27.57, stdev=344.87 00:29:54.628 clat (usec): min=207, max=53805, avg=3596.77, stdev=11168.80 00:29:54.628 lat (usec): min=213, max=53818, avg=3619.67, stdev=11212.36 00:29:54.628 clat percentiles (usec): 00:29:54.628 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 239], 00:29:54.628 | 30.00th=[ 249], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 285], 00:29:54.628 | 70.00th=[ 302], 80.00th=[ 326], 90.00th=[ 537], 95.00th=[41157], 00:29:54.628 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46924], 99.95th=[53740], 00:29:54.628 | 99.99th=[53740] 00:29:54.628 bw ( KiB/s): min= 96, max= 7600, per=5.02%, avg=1187.29, stdev=2827.88, samples=7 00:29:54.628 iops : min= 24, max= 1900, avg=296.71, stdev=707.02, samples=7 00:29:54.628 lat (usec) : 250=31.21%, 500=57.75%, 750=2.47%, 1000=0.29% 00:29:54.628 lat (msec) : 2=0.10%, 50=7.99%, 100=0.10% 00:29:54.628 cpu : usr=0.08%, sys=0.44%, ctx=1055, majf=0, minf=2 00:29:54.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:54.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.628 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:29:54.628 issued rwts: total=1051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:54.628 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2777672: Tue Nov 19 11:31:49 2024 00:29:54.628 read: IOPS=3209, BW=12.5MiB/s (13.1MB/s)(40.4MiB/3226msec) 00:29:54.628 slat (usec): min=6, max=15210, avg=12.01, stdev=167.46 00:29:54.628 clat (usec): min=196, max=1319, avg=294.88, stdev=63.40 00:29:54.628 lat (usec): min=202, max=15523, avg=306.89, stdev=180.14 00:29:54.628 clat percentiles (usec): 00:29:54.628 | 1.00th=[ 215], 5.00th=[ 235], 10.00th=[ 249], 20.00th=[ 255], 00:29:54.628 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:29:54.628 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 375], 95.00th=[ 449], 00:29:54.629 | 99.00th=[ 519], 99.50th=[ 537], 99.90th=[ 635], 99.95th=[ 709], 00:29:54.629 | 99.99th=[ 1004] 00:29:54.629 bw ( KiB/s): min=10984, max=14144, per=54.45%, avg=12864.00, stdev=1219.88, samples=6 00:29:54.629 iops : min= 2746, max= 3536, avg=3216.00, stdev=304.97, samples=6 00:29:54.629 lat (usec) : 250=10.74%, 500=87.64%, 750=1.56%, 1000=0.03% 00:29:54.629 lat (msec) : 2=0.02% 00:29:54.629 cpu : usr=1.92%, sys=4.93%, ctx=10357, majf=0, minf=1 00:29:54.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:54.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.629 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.629 issued rwts: total=10354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:54.629 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2777673: Tue Nov 19 11:31:49 2024 00:29:54.629 read: IOPS=24, BW=98.1KiB/s (100kB/s)(288KiB/2937msec) 00:29:54.629 slat (nsec): min=13429, max=26685, 
avg=14390.63, stdev=1735.85 00:29:54.629 clat (usec): min=462, max=42038, avg=40448.71, stdev=4782.21 00:29:54.629 lat (usec): min=489, max=42052, avg=40463.06, stdev=4780.74 00:29:54.629 clat percentiles (usec): 00:29:54.629 | 1.00th=[ 461], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:54.629 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:54.629 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:54.629 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:54.629 | 99.99th=[42206] 00:29:54.629 bw ( KiB/s): min= 96, max= 104, per=0.41%, avg=97.60, stdev= 3.58, samples=5 00:29:54.629 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:29:54.629 lat (usec) : 500=1.37% 00:29:54.629 lat (msec) : 50=97.26% 00:29:54.629 cpu : usr=0.07%, sys=0.00%, ctx=74, majf=0, minf=1 00:29:54.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:54.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.629 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.629 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:54.629 00:29:54.629 Run status group 0 (all jobs): 00:29:54.629 READ: bw=23.1MiB/s (24.2MB/s), 98.1KiB/s-12.5MiB/s (100kB/s-13.1MB/s), io=88.4MiB (92.7MB), run=2937-3833msec 00:29:54.629 00:29:54.629 Disk stats (read/write): 00:29:54.629 nvme0n1: ios=10710/0, merge=0/0, ticks=3588/0, in_queue=3588, util=99.08% 00:29:54.629 nvme0n2: ios=1079/0, merge=0/0, ticks=3749/0, in_queue=3749, util=99.25% 00:29:54.629 nvme0n3: ios=10039/0, merge=0/0, ticks=3858/0, in_queue=3858, util=98.75% 00:29:54.629 nvme0n4: ios=119/0, merge=0/0, ticks=3878/0, in_queue=3878, util=99.36% 00:29:54.887 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:29:54.887 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:29:55.146 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:55.146 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:29:55.404 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:55.404 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:29:55.662 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:55.662 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:29:55.920 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:29:55.920 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2777577 00:29:55.920 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:29:55.920 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:56.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:56.178 11:31:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:56.178 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:29:56.178 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:56.178 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:56.178 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:56.178 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:56.178 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:29:56.178 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:29:56.178 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:29:56.178 nvmf hotplug test: fio failed as expected 00:29:56.178 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:29:56.436 11:31:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:56.436 rmmod nvme_tcp 00:29:56.436 rmmod nvme_fabrics 00:29:56.436 rmmod nvme_keyring 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2775568 ']' 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2775568 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2775568 ']' 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2775568 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- common/autotest_common.sh@959 -- # uname 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2775568 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2775568' 00:29:56.436 killing process with pid 2775568 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2775568 00:29:56.436 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2775568 00:29:56.696 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:56.696 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:56.696 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:56.696 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:29:56.696 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:56.696 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:29:56.696 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:56.696 11:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.696 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.696 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.696 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.696 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.234 00:29:59.234 real 0m24.508s 00:29:59.234 user 1m8.736s 00:29:59.234 sys 0m10.829s 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:59.234 ************************************ 00:29:59.234 END TEST nvmf_fio_target 00:29:59.234 ************************************ 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:59.234 ************************************ 00:29:59.234 START TEST nvmf_bdevio 00:29:59.234 
************************************ 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:59.234 * Looking for test storage... 00:29:59.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:59.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.234 --rc genhtml_branch_coverage=1 00:29:59.234 --rc genhtml_function_coverage=1 00:29:59.234 --rc genhtml_legend=1 00:29:59.234 --rc geninfo_all_blocks=1 00:29:59.234 --rc geninfo_unexecuted_blocks=1 00:29:59.234 00:29:59.234 ' 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:59.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.234 --rc genhtml_branch_coverage=1 00:29:59.234 --rc genhtml_function_coverage=1 00:29:59.234 --rc genhtml_legend=1 00:29:59.234 --rc geninfo_all_blocks=1 00:29:59.234 --rc geninfo_unexecuted_blocks=1 00:29:59.234 00:29:59.234 ' 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:59.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.234 --rc genhtml_branch_coverage=1 00:29:59.234 --rc genhtml_function_coverage=1 00:29:59.234 --rc genhtml_legend=1 00:29:59.234 --rc geninfo_all_blocks=1 00:29:59.234 --rc geninfo_unexecuted_blocks=1 00:29:59.234 00:29:59.234 ' 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:59.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:29:59.234 --rc genhtml_branch_coverage=1 00:29:59.234 --rc genhtml_function_coverage=1 00:29:59.234 --rc genhtml_legend=1 00:29:59.234 --rc geninfo_all_blocks=1 00:29:59.234 --rc geninfo_unexecuted_blocks=1 00:29:59.234 00:29:59.234 ' 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:29:59.234 11:31:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.234 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.235 11:31:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:29:59.235 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:30:01.766 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.766 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:30:01.766 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:01.766 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:01.766 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:01.766 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:01.766 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:01.766 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:30:01.766 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:01.766 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.767 11:31:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:01.767 11:31:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:30:01.767 Found 0000:82:00.0 (0x8086 - 0x159b) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:30:01.767 Found 0000:82:00.1 (0x8086 - 0x159b) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:01.767 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:30:01.767 Found net devices under 0000:82:00.0: cvl_0_0 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:30:01.767 Found net devices under 0000:82:00.1: cvl_0_1 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.767 
11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:01.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:01.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:30:01.767 00:30:01.767 --- 10.0.0.2 ping statistics --- 00:30:01.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.767 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:30:01.767 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:01.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:30:01.768 00:30:01.768 --- 10.0.0.1 ping statistics --- 00:30:01.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.768 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2780705 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2780705 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2780705 ']' 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.768 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:01.768 [2024-11-19 11:31:57.211583] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:01.768 [2024-11-19 11:31:57.212731] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:30:01.768 [2024-11-19 11:31:57.212781] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.027 [2024-11-19 11:31:57.296982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:02.027 [2024-11-19 11:31:57.355003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.027 [2024-11-19 11:31:57.355056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.027 [2024-11-19 11:31:57.355080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.027 [2024-11-19 11:31:57.355091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.027 [2024-11-19 11:31:57.355101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.027 [2024-11-19 11:31:57.356629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:02.027 [2024-11-19 11:31:57.356699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:02.027 [2024-11-19 11:31:57.356753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:02.027 [2024-11-19 11:31:57.356756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.027 [2024-11-19 11:31:57.446282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:02.027 [2024-11-19 11:31:57.446473] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:02.027 [2024-11-19 11:31:57.446774] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
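Stepping back, the namespace-based TCP loopback that the trace built earlier (`nvmf/common.sh` lines @265–@291) can be sketched as the following command sequence. This is a sketch only: it requires root, the interface names `cvl_0_0`/`cvl_0_1` are the ones discovered in this particular run, and the physical NICs behind them will differ on other hosts.

```shell
#!/usr/bin/env bash
# Sketch of the netns-based NVMe/TCP loopback topology from the trace:
# the target-side port moves into its own namespace so the initiator
# (root namespace) and target talk over a real NIC pair via TCP.
NS=cvl_0_0_ns_spdk

ip netns add "$NS"                              # namespace that hosts the target
ip link set cvl_0_0 netns "$NS"                 # move the target-side port in
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator IP, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP discovery port, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The target application is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt …`, as seen at @508), so its 4420 listener binds to 10.0.0.2 while the initiator connects from 10.0.0.1.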
00:30:02.027 [2024-11-19 11:31:57.447446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:02.027 [2024-11-19 11:31:57.447698] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:02.027 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.027 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:30:02.027 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:02.027 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:02.027 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:02.027 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.027 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:02.027 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.027 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:02.027 [2024-11-19 11:31:57.501417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.285 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:02.286 Malloc0 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:02.286 [2024-11-19 11:31:57.573630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:02.286 { 00:30:02.286 "params": { 00:30:02.286 "name": "Nvme$subsystem", 00:30:02.286 "trtype": "$TEST_TRANSPORT", 00:30:02.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.286 "adrfam": "ipv4", 00:30:02.286 "trsvcid": "$NVMF_PORT", 00:30:02.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.286 "hdgst": ${hdgst:-false}, 00:30:02.286 "ddgst": ${ddgst:-false} 00:30:02.286 }, 00:30:02.286 "method": "bdev_nvme_attach_controller" 00:30:02.286 } 00:30:02.286 EOF 00:30:02.286 )") 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:30:02.286 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:02.286 "params": { 00:30:02.286 "name": "Nvme1", 00:30:02.286 "trtype": "tcp", 00:30:02.286 "traddr": "10.0.0.2", 00:30:02.286 "adrfam": "ipv4", 00:30:02.286 "trsvcid": "4420", 00:30:02.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:02.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:02.286 "hdgst": false, 00:30:02.286 "ddgst": false 00:30:02.286 }, 00:30:02.286 "method": "bdev_nvme_attach_controller" 00:30:02.286 }' 00:30:02.286 [2024-11-19 11:31:57.624738] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:30:02.286 [2024-11-19 11:31:57.624822] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780734 ] 00:30:02.286 [2024-11-19 11:31:57.702622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:02.286 [2024-11-19 11:31:57.767682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.286 [2024-11-19 11:31:57.767732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.286 [2024-11-19 11:31:57.767736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.853 I/O targets: 00:30:02.853 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:02.853 00:30:02.853 00:30:02.853 CUnit - A unit testing framework for C - Version 2.1-3 00:30:02.853 http://cunit.sourceforge.net/ 00:30:02.853 00:30:02.853 00:30:02.853 Suite: bdevio tests on: Nvme1n1 00:30:02.853 Test: blockdev write read block ...passed 00:30:02.853 Test: blockdev write zeroes read block ...passed 00:30:02.853 Test: blockdev write zeroes read no split ...passed 00:30:02.853 Test: blockdev 
write zeroes read split ...passed 00:30:02.853 Test: blockdev write zeroes read split partial ...passed 00:30:02.853 Test: blockdev reset ...[2024-11-19 11:31:58.215295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:02.853 [2024-11-19 11:31:58.215418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1141640 (9): Bad file descriptor 00:30:02.853 [2024-11-19 11:31:58.227391] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:30:02.853 passed 00:30:02.853 Test: blockdev write read 8 blocks ...passed 00:30:02.853 Test: blockdev write read size > 128k ...passed 00:30:02.853 Test: blockdev write read invalid size ...passed 00:30:02.853 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:02.853 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:02.853 Test: blockdev write read max offset ...passed 00:30:03.112 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:03.112 Test: blockdev writev readv 8 blocks ...passed 00:30:03.112 Test: blockdev writev readv 30 x 1block ...passed 00:30:03.112 Test: blockdev writev readv block ...passed 00:30:03.112 Test: blockdev writev readv size > 128k ...passed 00:30:03.112 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:03.112 Test: blockdev comparev and writev ...[2024-11-19 11:31:58.441723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:03.112 [2024-11-19 11:31:58.441760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:03.112 [2024-11-19 11:31:58.441784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:03.112 
[2024-11-19 11:31:58.441801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.112 [2024-11-19 11:31:58.442311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:03.112 [2024-11-19 11:31:58.442336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:03.112 [2024-11-19 11:31:58.442378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:03.112 [2024-11-19 11:31:58.442398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:03.112 [2024-11-19 11:31:58.442900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:03.112 [2024-11-19 11:31:58.442924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:03.112 [2024-11-19 11:31:58.442946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:03.112 [2024-11-19 11:31:58.442962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:03.112 [2024-11-19 11:31:58.443435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:03.112 [2024-11-19 11:31:58.443471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:03.112 [2024-11-19 11:31:58.443493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:03.112 [2024-11-19 11:31:58.443509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:03.112 passed 00:30:03.112 Test: blockdev nvme passthru rw ...passed 00:30:03.112 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:31:58.526746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:03.112 [2024-11-19 11:31:58.526779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:03.112 [2024-11-19 11:31:58.526931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:03.112 [2024-11-19 11:31:58.526953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:03.112 [2024-11-19 11:31:58.527103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:03.112 [2024-11-19 11:31:58.527127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:03.112 [2024-11-19 11:31:58.527271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:03.112 [2024-11-19 11:31:58.527295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:03.112 passed 00:30:03.112 Test: blockdev nvme admin passthru ...passed 00:30:03.112 Test: blockdev copy ...passed 00:30:03.112 00:30:03.112 Run Summary: Type Total Ran Passed Failed Inactive 00:30:03.112 suites 1 1 n/a 0 0 00:30:03.112 tests 23 23 23 0 0 00:30:03.112 asserts 152 152 152 0 n/a 00:30:03.112 00:30:03.112 Elapsed time = 0.967 
seconds 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:03.370 rmmod nvme_tcp 00:30:03.370 rmmod nvme_fabrics 00:30:03.370 rmmod nvme_keyring 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:30:03.370 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:30:03.371 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2780705 ']' 00:30:03.371 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2780705 00:30:03.371 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2780705 ']' 00:30:03.371 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2780705 00:30:03.371 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:30:03.371 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:03.371 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2780705 00:30:03.629 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:30:03.629 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:30:03.629 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2780705' 00:30:03.629 killing process with pid 2780705 00:30:03.629 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2780705 00:30:03.629 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2780705 00:30:03.907 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:03.907 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:03.907 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:03.907 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:30:03.907 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:30:03.907 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:03.907 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:30:03.907 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:03.908 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:03.908 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.908 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.908 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.812 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:05.812 00:30:05.812 real 0m6.985s 00:30:05.812 user 0m8.876s 00:30:05.812 sys 0m2.961s 00:30:05.812 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:05.812 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:05.812 ************************************ 00:30:05.812 END TEST nvmf_bdevio 00:30:05.812 ************************************ 00:30:05.812 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:05.812 00:30:05.812 real 4m2.159s 00:30:05.812 user 8m56.544s 00:30:05.812 sys 1m32.201s 00:30:05.812 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:30:05.812 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:05.812 ************************************ 00:30:05.812 END TEST nvmf_target_core_interrupt_mode 00:30:05.812 ************************************ 00:30:05.812 11:32:01 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:05.812 11:32:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:05.812 11:32:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:05.812 11:32:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.812 ************************************ 00:30:05.812 START TEST nvmf_interrupt 00:30:05.812 ************************************ 00:30:05.812 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:05.812 * Looking for test storage... 
00:30:05.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:05.812 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:05.812 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:30:05.812 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:06.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.071 --rc genhtml_branch_coverage=1 00:30:06.071 --rc genhtml_function_coverage=1 00:30:06.071 --rc genhtml_legend=1 00:30:06.071 --rc geninfo_all_blocks=1 00:30:06.071 --rc geninfo_unexecuted_blocks=1 00:30:06.071 00:30:06.071 ' 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:06.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.071 --rc genhtml_branch_coverage=1 00:30:06.071 --rc 
genhtml_function_coverage=1 00:30:06.071 --rc genhtml_legend=1 00:30:06.071 --rc geninfo_all_blocks=1 00:30:06.071 --rc geninfo_unexecuted_blocks=1 00:30:06.071 00:30:06.071 ' 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:06.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.071 --rc genhtml_branch_coverage=1 00:30:06.071 --rc genhtml_function_coverage=1 00:30:06.071 --rc genhtml_legend=1 00:30:06.071 --rc geninfo_all_blocks=1 00:30:06.071 --rc geninfo_unexecuted_blocks=1 00:30:06.071 00:30:06.071 ' 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:06.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.071 --rc genhtml_branch_coverage=1 00:30:06.071 --rc genhtml_function_coverage=1 00:30:06.071 --rc genhtml_legend=1 00:30:06.071 --rc geninfo_all_blocks=1 00:30:06.071 --rc geninfo_unexecuted_blocks=1 00:30:06.071 00:30:06.071 ' 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.071 
11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.071 
11:32:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.071 11:32:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.072 11:32:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.072 
11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.072 11:32:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.601 11:32:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:30:08.601 Found 0000:82:00.0 (0x8086 - 0x159b) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:30:08.601 Found 0000:82:00.1 (0x8086 - 0x159b) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.601 11:32:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.601 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:30:08.601 Found net devices under 0000:82:00.0: cvl_0_0 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:30:08.602 Found net devices under 0000:82:00.1: cvl_0_1 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.602 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.859 11:32:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:08.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:30:08.859 00:30:08.859 --- 10.0.0.2 ping statistics --- 00:30:08.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.859 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:08.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:30:08.859 00:30:08.859 --- 10.0.0.1 ping statistics --- 00:30:08.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.859 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:08.859 11:32:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:08.859 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:08.860 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:08.860 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2783238 00:30:08.860 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:08.860 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2783238 00:30:08.860 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2783238 ']' 00:30:08.860 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.860 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.860 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.860 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.860 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:08.860 [2024-11-19 11:32:04.246905] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:08.860 [2024-11-19 11:32:04.247923] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:30:08.860 [2024-11-19 11:32:04.247975] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.860 [2024-11-19 11:32:04.330588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:09.118 [2024-11-19 11:32:04.390635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.118 [2024-11-19 11:32:04.390699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.118 [2024-11-19 11:32:04.390712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.118 [2024-11-19 11:32:04.390723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.118 [2024-11-19 11:32:04.390732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:09.118 [2024-11-19 11:32:04.392170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.118 [2024-11-19 11:32:04.392176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.118 [2024-11-19 11:32:04.486764] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:09.118 [2024-11-19 11:32:04.486799] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:09.118 [2024-11-19 11:32:04.487049] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:30:09.118 5000+0 records in 00:30:09.118 5000+0 records out 00:30:09.118 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0148317 s, 690 MB/s 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:09.118 AIO0 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.118 11:32:04 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:09.118 [2024-11-19 11:32:04.592782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.118 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:09.377 [2024-11-19 11:32:04.617050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2783238 0 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2783238 0 idle 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2783238 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2783238 -w 256 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2783238 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.29 reactor_0' 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2783238 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.29 reactor_0 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:09.377 
11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2783238 1 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2783238 1 idle 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2783238 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2783238 -w 256 00:30:09.377 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2783245 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2783245 root 20 0 128.2g 
47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2783399 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:09.635 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2783238 0 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2783238 0 busy 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2783238 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2783238 -w 256 00:30:09.636 11:32:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:09.636 11:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2783238 root 20 0 128.2g 48384 34944 R 18.8 0.1 0:00.32 reactor_0' 00:30:09.893 11:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2783238 root 20 0 128.2g 48384 34944 R 18.8 0.1 0:00.32 reactor_0 00:30:09.893 11:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:09.893 11:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:09.893 11:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=18.8 00:30:09.893 11:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=18 00:30:09.894 11:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:09.894 11:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:09.894 11:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:10.828 11:32:06 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2783238 -w 256 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2783238 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.65 reactor_0' 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2783238 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.65 reactor_0 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2783238 1 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2783238 1 busy 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2783238 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@13 -- # local busy_threshold=30 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2783238 -w 256 00:30:10.828 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:11.086 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2783245 root 20 0 128.2g 48384 34944 R 93.8 0.1 0:01.34 reactor_1' 00:30:11.086 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2783245 root 20 0 128.2g 48384 34944 R 93.8 0.1 0:01.34 reactor_1 00:30:11.086 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:11.086 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:11.086 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:30:11.086 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:30:11.086 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:11.086 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:11.086 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:11.086 11:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:11.086 11:32:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2783399 00:30:21.057 Initializing NVMe Controllers 00:30:21.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:30:21.057 Controller IO queue size 256, less than required. 00:30:21.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:21.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:21.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:21.057 Initialization complete. Launching workers. 00:30:21.057 ======================================================== 00:30:21.057 Latency(us) 00:30:21.057 Device Information : IOPS MiB/s Average min max 00:30:21.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 14218.70 55.54 18014.39 4753.52 22143.58 00:30:21.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 14054.80 54.90 18225.32 4748.02 22391.49 00:30:21.057 ======================================================== 00:30:21.057 Total : 28273.50 110.44 18119.25 4748.02 22391.49 00:30:21.057 00:30:21.057 11:32:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:21.057 11:32:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2783238 0 00:30:21.057 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2783238 0 idle 00:30:21.057 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2783238 00:30:21.057 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle 
!= \i\d\l\e ]] 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2783238 -w 256 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2783238 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.23 reactor_0' 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2783238 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.23 reactor_0 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2783238 1 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2783238 1 idle 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2783238 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2783238 -w 256 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2783245 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.97 reactor_1' 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2783245 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.97 reactor_1 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:21.058 11:32:15 
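The trace above repeatedly runs the `reactor_is_busy_or_idle` helper from `interrupt/common.sh`: it samples `top -bHn 1 -p <pid>`, greps the reactor thread, strips leading whitespace, takes field 9 (%CPU), truncates the fraction, and compares against the idle (30) and busy (65) thresholds. A minimal standalone sketch of that classification step — the sample `top` line is taken from the log, the function name and the "indeterminate" branch are illustrative:

```shell
#!/usr/bin/env bash
# Classify a reactor thread from one `top -bHn 1` line, mirroring the
# threshold logic traced from interrupt/common.sh.
classify_reactor() {
    local top_line=$1
    local busy_threshold=65 idle_threshold=30
    local cpu_rate
    # Field 9 of top's per-thread view is %CPU, e.g. "0.0"
    cpu_rate=$(echo "$top_line" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%%.*}   # drop the fractional part, as the script does
    cpu_rate=${cpu_rate:-0}
    if (( cpu_rate > busy_threshold )); then
        echo busy
    elif (( cpu_rate <= idle_threshold )); then
        echo idle
    else
        echo indeterminate
    fi
}

classify_reactor '2783238 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.23 reactor_0'
```

In the real helper the "above idle threshold" case is retried up to `j=10` times rather than reported, since a single `top` sample can catch a transient spike.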
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:21.058 11:32:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2783238 0 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2783238 0 idle 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2783238 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=0 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2783238 -w 256 00:30:22.435 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2783238 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.33 reactor_0' 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2783238 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.33 reactor_0 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2783238 1 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2783238 1 idle 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2783238 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2783238 -w 256 00:30:22.693 11:32:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:22.694 11:32:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2783245 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1' 00:30:22.694 11:32:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2783245 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1 00:30:22.694 11:32:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:22.694 11:32:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:22.694 
11:32:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:22.694 11:32:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:22.694 11:32:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:22.694 11:32:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:22.694 11:32:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:22.694 11:32:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:22.694 11:32:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:22.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:22.952 11:32:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:22.952 rmmod nvme_tcp 00:30:22.952 rmmod nvme_fabrics 00:30:22.952 rmmod nvme_keyring 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2783238 ']' 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2783238 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2783238 ']' 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2783238 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2783238 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2783238' 00:30:22.952 killing process with pid 2783238 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2783238 00:30:22.952 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2783238 00:30:23.210 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:23.210 11:32:18 
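The teardown traced here (`nvmftestfini` → `killprocess 2783238`) first confirms the PID is alive, inspects its command name with `ps --no-headers -o comm=` (to decide whether a `sudo kill` is needed — here it sees `reactor_0`), then kills and waits for the process to exit. A condensed sketch of that kill-and-reap pattern, assuming the simple non-sudo path (`kill_and_wait` is an illustrative name):

```shell
# Kill a target process by PID and wait for it to exit, in the spirit of
# killprocess from autotest_common.sh.
kill_and_wait() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid" || echo '?')
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap if it is our child
}
```

The surrounding `set +e` … `set -e` bracket in the log exists because the `modprobe -v -r` unload loop is allowed to fail on modules that are still in use.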
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:23.210 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:23.210 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:30:23.210 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:30:23.210 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:23.210 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:30:23.210 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.210 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:23.210 11:32:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.210 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:23.210 11:32:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.750 11:32:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:25.750 00:30:25.750 real 0m19.421s 00:30:25.750 user 0m37.233s 00:30:25.750 sys 0m7.303s 00:30:25.750 11:32:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.750 11:32:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:25.750 ************************************ 00:30:25.750 END TEST nvmf_interrupt 00:30:25.750 ************************************ 00:30:25.750 00:30:25.750 real 25m43.547s 00:30:25.750 user 58m52.052s 00:30:25.750 sys 7m17.569s 00:30:25.750 11:32:20 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.750 11:32:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.750 ************************************ 00:30:25.750 END TEST nvmf_tcp 00:30:25.750 ************************************ 00:30:25.750 11:32:20 -- 
spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:30:25.750 11:32:20 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:25.750 11:32:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:25.750 11:32:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.750 11:32:20 -- common/autotest_common.sh@10 -- # set +x 00:30:25.750 ************************************ 00:30:25.750 START TEST spdkcli_nvmf_tcp 00:30:25.750 ************************************ 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:25.750 * Looking for test storage... 00:30:25.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.750 
11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:25.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.750 --rc genhtml_branch_coverage=1 00:30:25.750 --rc genhtml_function_coverage=1 00:30:25.750 
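The `lt 1.15 2` trace above runs `cmp_versions` from `scripts/common.sh`: both version strings are split on `IFS=.-:` into arrays and compared field by field as decimals, with missing fields treated as zero. A compact sketch of that element-wise comparison (the function name `version_lt` is illustrative; non-numeric fields are not handled, as in the sketch's assumption of plain dotted versions):

```shell
# Numeric, element-wise version comparison in the spirit of cmp_versions
# from scripts/common.sh: split on '.', '-' and ':'; missing fields are 0.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0}
        b=${v2[i]:-0}
        if (( a < b )); then return 0; fi   # strictly less at first differing field
        if (( a > b )); then return 1; fi
    done
    return 1                                # equal versions are not less-than
}
```

This is why the lcov check reads `lt 1.15 2` rather than a string comparison: `"1.15" < "2"` holds numerically field-by-field, where a naive lexical compare of `1.15` vs `1.9` would get the order wrong.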
--rc genhtml_legend=1 00:30:25.750 --rc geninfo_all_blocks=1 00:30:25.750 --rc geninfo_unexecuted_blocks=1 00:30:25.750 00:30:25.750 ' 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:25.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.750 --rc genhtml_branch_coverage=1 00:30:25.750 --rc genhtml_function_coverage=1 00:30:25.750 --rc genhtml_legend=1 00:30:25.750 --rc geninfo_all_blocks=1 00:30:25.750 --rc geninfo_unexecuted_blocks=1 00:30:25.750 00:30:25.750 ' 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:25.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.750 --rc genhtml_branch_coverage=1 00:30:25.750 --rc genhtml_function_coverage=1 00:30:25.750 --rc genhtml_legend=1 00:30:25.750 --rc geninfo_all_blocks=1 00:30:25.750 --rc geninfo_unexecuted_blocks=1 00:30:25.750 00:30:25.750 ' 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:25.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.750 --rc genhtml_branch_coverage=1 00:30:25.750 --rc genhtml_function_coverage=1 00:30:25.750 --rc genhtml_legend=1 00:30:25.750 --rc geninfo_all_blocks=1 00:30:25.750 --rc geninfo_unexecuted_blocks=1 00:30:25.750 00:30:25.750 ' 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.750 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:25.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
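The `[: : integer expression expected` warning captured above comes from `nvmf/common.sh` line 33 handing an empty string to a numeric test (`'[' '' -eq 1 ']'`). The test then evaluates false and the script continues, but the stderr noise lands in the log. A defensive pattern that avoids the warning is to default the value before the arithmetic comparison (the variable name here is illustrative, not from the script):

```shell
# An unset/empty variable makes `[ ... -eq ... ]` print
# "integer expression expected" on stderr.
maybe_empty=""

# Failing form (what the log shows):  [ "$maybe_empty" -eq 1 ]
# Defensive form: default to 0 before comparing numerically.
if [ "${maybe_empty:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```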
00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2785409 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2785409 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2785409 ']' 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:25.751 11:32:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.751 [2024-11-19 11:32:20.941119] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:30:25.751 [2024-11-19 11:32:20.941197] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785409 ] 00:30:25.751 [2024-11-19 11:32:21.014310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:25.751 [2024-11-19 11:32:21.072264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.751 [2024-11-19 11:32:21.072269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.751 11:32:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.751 11:32:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:30:25.751 11:32:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:25.751 11:32:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:25.751 11:32:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.751 11:32:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:25.751 11:32:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:25.751 11:32:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:25.751 11:32:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:25.751 11:32:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.751 11:32:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:25.751 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:25.751 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:25.751 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:25.751 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:30:25.751 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:25.751 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:25.751 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:25.751 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:25.751 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:25.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:25.751 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:25.751 ' 00:30:29.036 [2024-11-19 11:32:23.816831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.600 [2024-11-19 11:32:25.089185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:32.126 [2024-11-19 11:32:27.432490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:34.024 [2024-11-19 11:32:29.482898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:35.978 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:35.979 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:35.979 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:35.979 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:35.979 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:35.979 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:35.979 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:35.979 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:35.979 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:35.979 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:35.979 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:35.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:35.979 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:35.979 11:32:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:35.979 11:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:35.979 11:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.979 11:32:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:35.979 11:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.979 11:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.979 11:32:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:35.979 11:32:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:36.239 11:32:31 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:36.239 11:32:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:36.239 11:32:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:36.239 11:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:36.239 11:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:36.239 11:32:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:36.239 11:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:36.239 11:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:36.239 11:32:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:36.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:36.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:36.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:36.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:36.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:36.239 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:36.239 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:36.239 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:30:36.239 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:36.239 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:36.239 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:36.239 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:36.239 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:36.239 ' 00:30:41.503 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:41.503 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:41.503 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:41.503 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:41.503 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:41.503 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:41.503 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:41.503 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:41.503 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:41.503 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:41.503 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:41.503 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:41.503 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:41.503 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2785409 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2785409 ']' 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2785409 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2785409 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2785409' 00:30:41.761 killing process with pid 2785409 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2785409 00:30:41.761 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2785409 00:30:42.020 11:32:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:42.020 11:32:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:42.020 11:32:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2785409 ']' 00:30:42.020 11:32:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2785409 00:30:42.020 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2785409 ']' 00:30:42.020 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2785409 00:30:42.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2785409) - No such process 00:30:42.020 11:32:37 
spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2785409 is not found' 00:30:42.020 Process with pid 2785409 is not found 00:30:42.020 11:32:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:42.020 11:32:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:42.020 11:32:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:42.020 00:30:42.020 real 0m16.588s 00:30:42.020 user 0m35.393s 00:30:42.020 sys 0m0.740s 00:30:42.020 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.020 11:32:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:42.020 ************************************ 00:30:42.020 END TEST spdkcli_nvmf_tcp 00:30:42.020 ************************************ 00:30:42.020 11:32:37 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:42.020 11:32:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:42.020 11:32:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:42.020 11:32:37 -- common/autotest_common.sh@10 -- # set +x 00:30:42.020 ************************************ 00:30:42.020 START TEST nvmf_identify_passthru 00:30:42.020 ************************************ 00:30:42.020 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:42.020 * Looking for test storage... 
00:30:42.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:42.020 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:42.020 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:30:42.020 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:42.020 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:30:42.020 11:32:37 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:30:42.279 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.279 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:42.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.279 --rc genhtml_branch_coverage=1 00:30:42.279 --rc genhtml_function_coverage=1 00:30:42.279 --rc genhtml_legend=1 00:30:42.279 --rc geninfo_all_blocks=1 00:30:42.279 --rc geninfo_unexecuted_blocks=1 00:30:42.279 00:30:42.279 ' 00:30:42.279 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:42.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.279 --rc genhtml_branch_coverage=1 00:30:42.279 --rc genhtml_function_coverage=1 
00:30:42.279 --rc genhtml_legend=1 00:30:42.279 --rc geninfo_all_blocks=1 00:30:42.279 --rc geninfo_unexecuted_blocks=1 00:30:42.279 00:30:42.279 ' 00:30:42.279 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:42.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.279 --rc genhtml_branch_coverage=1 00:30:42.279 --rc genhtml_function_coverage=1 00:30:42.279 --rc genhtml_legend=1 00:30:42.279 --rc geninfo_all_blocks=1 00:30:42.279 --rc geninfo_unexecuted_blocks=1 00:30:42.279 00:30:42.279 ' 00:30:42.279 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:42.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.279 --rc genhtml_branch_coverage=1 00:30:42.279 --rc genhtml_function_coverage=1 00:30:42.279 --rc genhtml_legend=1 00:30:42.279 --rc geninfo_all_blocks=1 00:30:42.279 --rc geninfo_unexecuted_blocks=1 00:30:42.279 00:30:42.279 ' 00:30:42.279 11:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.279 11:32:37 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.279 11:32:37 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.279 11:32:37 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.279 11:32:37 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.279 11:32:37 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.279 11:32:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:42.279 11:32:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.279 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:42.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:42.280 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.280 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.280 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.280 11:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.280 11:32:37 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.280 11:32:37 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.280 11:32:37 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.280 11:32:37 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.280 11:32:37 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.280 11:32:37 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.280 11:32:37 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.280 11:32:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:42.280 11:32:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.280 11:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:42.280 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:42.280 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.280 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:30:42.280 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:42.280 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:42.280 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.280 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:42.280 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.280 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:42.280 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:42.280 11:32:37 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:30:42.280 11:32:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:30:44.812 11:32:40 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:44.812 
11:32:40 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:30:44.812 Found 0000:82:00.0 (0x8086 - 0x159b) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:30:44.812 Found 0000:82:00.1 (0x8086 - 0x159b) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:30:44.812 Found net devices under 0000:82:00.0: cvl_0_0 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:30:44.812 Found net devices under 0000:82:00.1: cvl_0_1 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:44.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:30:44.812 00:30:44.812 --- 10.0.0.2 ping statistics --- 00:30:44.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.812 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:30:44.812 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:45.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:30:45.071 00:30:45.071 --- 10.0.0.1 ping statistics --- 00:30:45.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.071 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:30:45.071 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.071 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:30:45.071 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.071 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.071 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.071 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.071 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.071 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.071 11:32:40 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.071 11:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:45.071 11:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:45.071 11:32:40 nvmf_identify_passthru -- 
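The nvmf_tcp_init trace above (common.sh@250-291) builds a two-namespace loopback topology: the target NIC (cvl_0_0) is moved into a private namespace with 10.0.0.2 while the initiator NIC (cvl_0_1) keeps 10.0.0.1 in the root namespace, so NVMe/TCP traffic crosses a real link between the two E810 ports. A non-privileged sketch that only prints the equivalent commands (interface names taken from the log; actually running them requires root):

```shell
# Prints the ip(8) commands nvmf_tcp_init runs, without executing them.
# $1 = target interface, $2 = initiator interface.
nvmf_tcp_init_cmds() {
    tgt=$1 ini=$2 ns=${1}_ns_spdk
    cat <<EOF
ip -4 addr flush $tgt
ip -4 addr flush $ini
ip netns add $ns
ip link set $tgt netns $ns
ip addr add 10.0.0.1/24 dev $ini
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt
ip link set $ini up
ip netns exec $ns ip link set $tgt up
ip netns exec $ns ip link set lo up
EOF
}

nvmf_tcp_init_cmds cvl_0_0 cvl_0_1
```

The mutual pings above (root namespace to 10.0.0.2, namespace to 10.0.0.1) are the smoke test that this topology came up.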
common/autotest_common.sh@1498 -- # bdfs=() 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:81:00.0 00:30:45.071 11:32:40 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:81:00.0 00:30:45.071 11:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:81:00.0 00:30:45.071 11:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:81:00.0 ']' 00:30:45.071 11:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:81:00.0' -i 0 00:30:45.071 11:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:45.071 11:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:50.339 11:32:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ951302VM2P0BGN 00:30:50.339 11:32:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:81:00.0' -i 0 00:30:50.339 11:32:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:50.339 11:32:45 nvmf_identify_passthru -- 
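get_first_nvme_bdf above pipes gen_nvme.sh (a bdev config in JSON) through jq -r '.config[].params.traddr' and takes the first address. A jq-free sketch of the same extraction on an illustrative payload (the JSON shape mirrors the trace; the exact fields gen_nvme.sh emits beyond traddr are an assumption):

```shell
# Illustrative gen_nvme.sh-style output; only traddr matters here.
config='{"config":[{"params":{"name":"Nvme0","traddr":"0000:81:00.0"}}]}'

# Pull the first "traddr" value out of the JSON with sed.
first_bdf=$(printf '%s\n' "$config" | sed -n 's/.*"traddr":"\([^"]*\)".*/\1/p' | head -n1)
echo "$first_bdf"
```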
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:55.603 11:32:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:55.603 11:32:50 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:55.603 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.603 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:55.603 11:32:50 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:55.603 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:55.603 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:55.603 11:32:50 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2790599 00:30:55.603 11:32:50 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:55.603 11:32:50 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:55.603 11:32:50 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2790599 00:30:55.603 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2790599 ']' 00:30:55.603 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.603 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.603 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:55.603 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.603 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:55.603 [2024-11-19 11:32:50.655809] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:30:55.603 [2024-11-19 11:32:50.655908] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.603 [2024-11-19 11:32:50.742120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:55.603 [2024-11-19 11:32:50.802150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:55.603 [2024-11-19 11:32:50.802204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:55.603 [2024-11-19 11:32:50.802226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:55.603 [2024-11-19 11:32:50.802241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:55.603 [2024-11-19 11:32:50.802252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:55.603 [2024-11-19 11:32:50.803779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.603 [2024-11-19 11:32:50.803842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:55.603 [2024-11-19 11:32:50.803870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:55.603 [2024-11-19 11:32:50.803873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.604 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.604 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:30:55.604 11:32:50 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:55.604 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.604 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:55.604 INFO: Log level set to 20 00:30:55.604 INFO: Requests: 00:30:55.604 { 00:30:55.604 "jsonrpc": "2.0", 00:30:55.604 "method": "nvmf_set_config", 00:30:55.604 "id": 1, 00:30:55.604 "params": { 00:30:55.604 "admin_cmd_passthru": { 00:30:55.604 "identify_ctrlr": true 00:30:55.604 } 00:30:55.604 } 00:30:55.604 } 00:30:55.604 00:30:55.604 INFO: response: 00:30:55.604 { 00:30:55.604 "jsonrpc": "2.0", 00:30:55.604 "id": 1, 00:30:55.604 "result": true 00:30:55.604 } 00:30:55.604 00:30:55.604 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.604 11:32:50 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:55.604 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.604 11:32:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:55.604 INFO: Setting log level to 20 00:30:55.604 INFO: Setting log level to 20 00:30:55.604 INFO: Log level set to 20 00:30:55.604 INFO: Log level set to 20 00:30:55.604 
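The rpc_cmd trace above sends a JSON-RPC 2.0 request over /var/tmp/spdk.sock to enable passthru identify. A minimal sketch that only assembles the payload shown in the INFO block (delivering it would need a running nvmf_tgt and a tool that speaks to the UNIX socket):

```shell
# Build a JSON-RPC 2.0 request body: method, numeric id, params object.
rpc_payload() {
    method=$1 id=$2 params=$3
    printf '{"jsonrpc":"2.0","method":"%s","id":%s,"params":%s}\n' \
        "$method" "$id" "$params"
}

# The nvmf_set_config request from the log, reassembled.
req=$(rpc_payload nvmf_set_config 1 '{"admin_cmd_passthru":{"identify_ctrlr":true}}')
echo "$req"
```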
INFO: Requests: 00:30:55.604 { 00:30:55.604 "jsonrpc": "2.0", 00:30:55.604 "method": "framework_start_init", 00:30:55.604 "id": 1 00:30:55.604 } 00:30:55.604 00:30:55.604 INFO: Requests: 00:30:55.604 { 00:30:55.604 "jsonrpc": "2.0", 00:30:55.604 "method": "framework_start_init", 00:30:55.604 "id": 1 00:30:55.604 } 00:30:55.604 00:30:55.604 [2024-11-19 11:32:50.995550] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:55.604 INFO: response: 00:30:55.604 { 00:30:55.604 "jsonrpc": "2.0", 00:30:55.604 "id": 1, 00:30:55.604 "result": true 00:30:55.604 } 00:30:55.604 00:30:55.604 INFO: response: 00:30:55.604 { 00:30:55.604 "jsonrpc": "2.0", 00:30:55.604 "id": 1, 00:30:55.604 "result": true 00:30:55.604 } 00:30:55.604 00:30:55.604 11:32:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.604 11:32:51 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:55.604 11:32:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.604 11:32:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:55.604 INFO: Setting log level to 40 00:30:55.604 INFO: Setting log level to 40 00:30:55.604 INFO: Setting log level to 40 00:30:55.604 [2024-11-19 11:32:51.005727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.604 11:32:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.604 11:32:51 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:55.604 11:32:51 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.604 11:32:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:55.604 11:32:51 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:81:00.0 00:30:55.604 11:32:51 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.604 11:32:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:58.886 Nvme0n1 00:30:58.886 11:32:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.886 11:32:53 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:58.886 11:32:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.886 11:32:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:58.886 11:32:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.886 11:32:53 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:58.886 11:32:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.886 11:32:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:58.886 11:32:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.886 11:32:53 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.886 11:32:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.886 11:32:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:58.886 [2024-11-19 11:32:53.919324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.886 11:32:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.886 11:32:53 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:58.886 11:32:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.886 11:32:53 
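The three rpc_cmd calls traced above stand up the passthru target: create subsystem cnode1 capped at one namespace, attach the Nvme0n1 bdev as that namespace, then listen on 10.0.0.2:4420. Sketched as the equivalent scripts/rpc.py invocations (printed only, not executed; they assume a running nvmf_tgt):

```shell
# Emit the rpc.py command sequence matching identify_passthru.sh@42-44.
passthru_target_cmds() {
    nqn=nqn.2016-06.io.spdk:cnode1
    cat <<EOF
rpc.py nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -m 1
rpc.py nvmf_subsystem_add_ns $nqn Nvme0n1
rpc.py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
EOF
}

passthru_target_cmds
```

The nvmf_get_subsystems dump that follows in the log is the readback confirming this configuration took effect.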
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:58.886 [ 00:30:58.886 { 00:30:58.886 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:58.886 "subtype": "Discovery", 00:30:58.886 "listen_addresses": [], 00:30:58.886 "allow_any_host": true, 00:30:58.886 "hosts": [] 00:30:58.886 }, 00:30:58.886 { 00:30:58.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.886 "subtype": "NVMe", 00:30:58.886 "listen_addresses": [ 00:30:58.886 { 00:30:58.886 "trtype": "TCP", 00:30:58.886 "adrfam": "IPv4", 00:30:58.886 "traddr": "10.0.0.2", 00:30:58.886 "trsvcid": "4420" 00:30:58.886 } 00:30:58.886 ], 00:30:58.886 "allow_any_host": true, 00:30:58.886 "hosts": [], 00:30:58.886 "serial_number": "SPDK00000000000001", 00:30:58.886 "model_number": "SPDK bdev Controller", 00:30:58.886 "max_namespaces": 1, 00:30:58.886 "min_cntlid": 1, 00:30:58.886 "max_cntlid": 65519, 00:30:58.886 "namespaces": [ 00:30:58.886 { 00:30:58.886 "nsid": 1, 00:30:58.886 "bdev_name": "Nvme0n1", 00:30:58.886 "name": "Nvme0n1", 00:30:58.886 "nguid": "596C7679C6F94CA082DBAD1C76779C35", 00:30:58.886 "uuid": "596c7679-c6f9-4ca0-82db-ad1c76779c35" 00:30:58.886 } 00:30:58.886 ] 00:30:58.886 } 00:30:58.886 ] 00:30:58.886 11:32:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.886 11:32:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:58.886 11:32:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:58.886 11:32:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:58.887 11:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ951302VM2P0BGN 00:30:58.887 11:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:58.887 11:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:58.887 11:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:58.887 11:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:58.887 11:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ951302VM2P0BGN '!=' PHLJ951302VM2P0BGN ']' 00:30:58.887 11:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:58.887 11:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:58.887 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.887 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:58.887 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.887 11:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:58.887 11:32:54 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:58.887 11:32:54 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:58.887 11:32:54 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:30:58.887 11:32:54 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:58.887 11:32:54 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:30:58.887 11:32:54 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:58.887 11:32:54 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:58.887 rmmod nvme_tcp 00:30:58.887 rmmod nvme_fabrics 00:30:58.887 rmmod nvme_keyring 00:30:58.887 11:32:54 
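The passthru check above greps 'Serial Number:' out of spdk_nvme_identify output twice, once against the local PCIe controller and once over TCP, and the test passes only if the two values match. A sketch of that extraction against canned identify output (the serial value is taken from the log):

```shell
# One line of spdk_nvme_identify output, as seen in the trace.
identify_out='Serial Number: PHLJ951302VM2P0BGN'

# Same grep|awk pipeline identify_passthru.sh uses: field 3 is the serial.
serial=$(printf '%s\n' "$identify_out" | grep 'Serial Number:' | awk '{print $3}')

# The passthru test compares local vs over-TCP serials this way.
if [ "$serial" != "PHLJ951302VM2P0BGN" ]; then
    echo "serial mismatch: $serial" >&2
fi
echo "$serial"
```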
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:58.887 11:32:54 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:30:58.887 11:32:54 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:30:58.887 11:32:54 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2790599 ']' 00:30:58.887 11:32:54 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2790599 00:30:58.887 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2790599 ']' 00:30:58.887 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2790599 00:30:58.887 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:30:58.887 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:58.887 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2790599 00:30:59.144 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:59.144 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:59.144 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2790599' 00:30:59.144 killing process with pid 2790599 00:30:59.144 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2790599 00:30:59.144 11:32:54 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2790599 00:31:01.675 11:32:56 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:01.675 11:32:56 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:01.675 11:32:56 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:01.675 11:32:56 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:31:01.675 11:32:56 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:31:01.675 11:32:56 nvmf_identify_passthru -- 
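killprocess in autotest_common.sh (traced above) checks the process name, refuses to kill anything running as sudo, then kills and waits on the pid. A reduced sketch of the kill-and-reap half against a throwaway background process:

```shell
# Terminate a pid we own and reap it; returns 1 if it was not alive.
killprocess_sketch() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # must be alive and ours
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null || true          # reap; ignore SIGTERM status
}

# Demo target: a background sleep we are free to kill.
sleep 30 &
bgpid=$!
killprocess_sketch "$bgpid"
```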
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:01.675 11:32:56 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:31:01.675 11:32:56 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:01.675 11:32:56 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:01.675 11:32:56 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.675 11:32:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:01.675 11:32:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.578 11:32:58 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:03.578 00:31:03.578 real 0m21.414s 00:31:03.579 user 0m31.279s 00:31:03.579 sys 0m3.852s 00:31:03.579 11:32:58 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.579 11:32:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:03.579 ************************************ 00:31:03.579 END TEST nvmf_identify_passthru 00:31:03.579 ************************************ 00:31:03.579 11:32:58 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:03.579 11:32:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:03.579 11:32:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:03.579 11:32:58 -- common/autotest_common.sh@10 -- # set +x 00:31:03.579 ************************************ 00:31:03.579 START TEST nvmf_dif 00:31:03.579 ************************************ 00:31:03.579 11:32:58 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:03.579 * Looking for test storage... 
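The iptr cleanup above (iptables-save | grep -v SPDK_NVMF | iptables-restore) works because every rule added through ipts earlier in the log carries an SPDK_NVMF comment, so teardown can strip all test rules in one pass. A sketch of the tagging half that only assembles the argument string (the real wrapper quotes the comment and invokes iptables, which needs root):

```shell
# Append a comment-match tag to an iptables rule spec so cleanup can
# later filter it out of iptables-save by the SPDK_NVMF marker.
ipts_args() {
    printf '%s -m comment --comment SPDK_NVMF:%s\n' "$*" "$*"
}

ipts_args -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```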
00:31:03.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:03.579 11:32:58 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:03.579 11:32:58 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:31:03.579 11:32:58 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:03.579 11:32:58 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.579 11:32:58 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:31:03.579 11:32:58 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.579 11:32:58 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:03.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.579 --rc genhtml_branch_coverage=1 00:31:03.579 --rc genhtml_function_coverage=1 00:31:03.579 --rc genhtml_legend=1 00:31:03.579 --rc geninfo_all_blocks=1 00:31:03.579 --rc geninfo_unexecuted_blocks=1 00:31:03.579 00:31:03.579 ' 00:31:03.579 11:32:58 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:03.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.579 --rc genhtml_branch_coverage=1 00:31:03.579 --rc genhtml_function_coverage=1 00:31:03.579 --rc genhtml_legend=1 00:31:03.579 --rc geninfo_all_blocks=1 00:31:03.579 --rc geninfo_unexecuted_blocks=1 00:31:03.579 00:31:03.579 ' 00:31:03.579 11:32:58 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
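The scripts/common.sh trace above hand-rolls cmp_versions for 'lt 1.15 2' by splitting both versions on '.', '-' and ':' and comparing field by field. A shorter sketch with the same less-than semantics, assuming GNU sort's -V version ordering is available:

```shell
# True when $1 is a strictly lower version than $2.
version_lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Mirrors the 'lt 1.15 2' check from the lcov version probe above.
version_lt 1.15 2 && echo "1.15 < 2"
```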
'LCOV=lcov 00:31:03.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.579 --rc genhtml_branch_coverage=1 00:31:03.579 --rc genhtml_function_coverage=1 00:31:03.579 --rc genhtml_legend=1 00:31:03.579 --rc geninfo_all_blocks=1 00:31:03.579 --rc geninfo_unexecuted_blocks=1 00:31:03.579 00:31:03.579 ' 00:31:03.579 11:32:58 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:03.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.579 --rc genhtml_branch_coverage=1 00:31:03.579 --rc genhtml_function_coverage=1 00:31:03.579 --rc genhtml_legend=1 00:31:03.579 --rc geninfo_all_blocks=1 00:31:03.579 --rc geninfo_unexecuted_blocks=1 00:31:03.579 00:31:03.579 ' 00:31:03.579 11:32:58 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.579 11:32:58 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:03.579 11:32:58 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.579 11:32:58 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.579 11:32:58 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.579 11:32:58 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.579 11:32:58 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.579 11:32:58 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.579 11:32:58 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.579 11:32:58 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.579 11:32:58 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.579 11:32:58 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:31:03.579 11:32:59 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.579 11:32:59 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:31:03.579 11:32:59 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.579 11:32:59 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.579 11:32:59 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.579 11:32:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.579 11:32:59 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.579 11:32:59 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.579 11:32:59 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:03.579 11:32:59 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:03.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:03.579 11:32:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:03.579 11:32:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:31:03.579 11:32:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:03.579 11:32:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:03.579 11:32:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:03.579 11:32:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.580 11:32:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:03.580 11:32:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.580 11:32:59 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:03.580 11:32:59 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:03.580 11:32:59 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:31:03.580 11:32:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:31:06.118 11:33:01 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:31:06.118 Found 0000:82:00.0 (0x8086 - 0x159b) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:31:06.118 Found 0000:82:00.1 (0x8086 - 0x159b) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:06.118 11:33:01 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:31:06.118 Found net devices under 0000:82:00.0: cvl_0_0 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:31:06.118 Found net devices under 0000:82:00.1: cvl_0_1 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.118 
11:33:01 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:06.118 11:33:01 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.119 11:33:01 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.119 11:33:01 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:06.119 11:33:01 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:06.119 11:33:01 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.119 11:33:01 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.119 11:33:01 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:06.119 11:33:01 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:06.119 11:33:01 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.119 11:33:01 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:06.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:06.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:31:06.378 00:31:06.378 --- 10.0.0.2 ping statistics --- 00:31:06.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.378 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:06.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:31:06.378 00:31:06.378 --- 10.0.0.1 ping statistics --- 00:31:06.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.378 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:06.378 11:33:01 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:07.756 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:07.756 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:07.756 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:07.756 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:07.756 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:07.756 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:07.756 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:07.756 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:07.756 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:07.756 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:07.756 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:07.756 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:07.756 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:31:07.756 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:07.756 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:07.756 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:07.756 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:08.046 11:33:03 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.046 11:33:03 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:08.046 11:33:03 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:08.046 11:33:03 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.046 11:33:03 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:08.046 11:33:03 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:08.046 11:33:03 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:08.046 11:33:03 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:08.046 11:33:03 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:08.046 11:33:03 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.047 11:33:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:08.047 11:33:03 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2794501 00:31:08.047 11:33:03 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:08.047 11:33:03 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2794501 00:31:08.047 11:33:03 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2794501 ']' 00:31:08.047 11:33:03 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.047 11:33:03 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:08.047 11:33:03 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:08.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.047 11:33:03 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:08.047 11:33:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:08.047 [2024-11-19 11:33:03.430621] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:31:08.047 [2024-11-19 11:33:03.430715] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.047 [2024-11-19 11:33:03.515828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.312 [2024-11-19 11:33:03.578693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.312 [2024-11-19 11:33:03.578751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.312 [2024-11-19 11:33:03.578765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.312 [2024-11-19 11:33:03.578778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.312 [2024-11-19 11:33:03.578788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:08.312 [2024-11-19 11:33:03.579443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.312 11:33:03 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:08.312 11:33:03 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:31:08.312 11:33:03 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:08.312 11:33:03 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:08.312 11:33:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:08.312 11:33:03 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.312 11:33:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:08.312 11:33:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:08.312 11:33:03 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.312 11:33:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:08.312 [2024-11-19 11:33:03.725392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.312 11:33:03 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.312 11:33:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:08.312 11:33:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:08.312 11:33:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:08.312 11:33:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:08.312 ************************************ 00:31:08.312 START TEST fio_dif_1_default 00:31:08.312 ************************************ 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:08.312 bdev_null0 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:08.312 [2024-11-19 11:33:03.781674] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:08.312 { 00:31:08.312 "params": { 00:31:08.312 "name": "Nvme$subsystem", 00:31:08.312 "trtype": "$TEST_TRANSPORT", 00:31:08.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:08.312 "adrfam": "ipv4", 00:31:08.312 "trsvcid": "$NVMF_PORT", 00:31:08.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:08.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:08.312 "hdgst": ${hdgst:-false}, 00:31:08.312 "ddgst": ${ddgst:-false} 00:31:08.312 }, 00:31:08.312 "method": "bdev_nvme_attach_controller" 00:31:08.312 } 00:31:08.312 EOF 00:31:08.312 )") 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:08.312 11:33:03 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:31:08.312 11:33:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:08.312 "params": { 00:31:08.312 "name": "Nvme0", 00:31:08.312 "trtype": "tcp", 00:31:08.312 "traddr": "10.0.0.2", 00:31:08.312 "adrfam": "ipv4", 00:31:08.312 "trsvcid": "4420", 00:31:08.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:08.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:08.313 "hdgst": false, 00:31:08.313 "ddgst": false 00:31:08.313 }, 00:31:08.313 "method": "bdev_nvme_attach_controller" 00:31:08.313 }' 00:31:08.572 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:08.572 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:08.572 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:08.572 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:08.572 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:08.572 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:08.572 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:08.572 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:08.572 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:08.572 11:33:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:08.572 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:08.572 fio-3.35 
00:31:08.572 Starting 1 thread 00:31:20.769 00:31:20.769 filename0: (groupid=0, jobs=1): err= 0: pid=2794732: Tue Nov 19 11:33:14 2024 00:31:20.769 read: IOPS=193, BW=776KiB/s (794kB/s)(7776KiB/10025msec) 00:31:20.769 slat (nsec): min=3817, max=47256, avg=8417.51, stdev=2685.52 00:31:20.769 clat (usec): min=483, max=46701, avg=20601.35, stdev=20408.74 00:31:20.769 lat (usec): min=490, max=46729, avg=20609.77, stdev=20408.52 00:31:20.769 clat percentiles (usec): 00:31:20.769 | 1.00th=[ 529], 5.00th=[ 545], 10.00th=[ 562], 20.00th=[ 586], 00:31:20.769 | 30.00th=[ 627], 40.00th=[ 668], 50.00th=[ 734], 60.00th=[41157], 00:31:20.769 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:20.769 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:31:20.769 | 99.99th=[46924] 00:31:20.769 bw ( KiB/s): min= 704, max= 896, per=100.00%, avg=776.00, stdev=40.04, samples=20 00:31:20.769 iops : min= 176, max= 224, avg=194.00, stdev=10.01, samples=20 00:31:20.769 lat (usec) : 500=0.21%, 750=50.21%, 1000=0.62% 00:31:20.769 lat (msec) : 50=48.97% 00:31:20.769 cpu : usr=90.81%, sys=8.90%, ctx=12, majf=0, minf=9 00:31:20.769 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:20.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.769 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:20.769 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:20.769 00:31:20.769 Run status group 0 (all jobs): 00:31:20.769 READ: bw=776KiB/s (794kB/s), 776KiB/s-776KiB/s (794kB/s-794kB/s), io=7776KiB (7963kB), run=10025-10025msec 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in 
"$@" 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.769 00:31:20.769 real 0m11.420s 00:31:20.769 user 0m10.462s 00:31:20.769 sys 0m1.207s 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:20.769 ************************************ 00:31:20.769 END TEST fio_dif_1_default 00:31:20.769 ************************************ 00:31:20.769 11:33:15 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:20.769 11:33:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:20.769 11:33:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:20.769 11:33:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:20.769 ************************************ 00:31:20.769 START TEST fio_dif_1_multi_subsystems 00:31:20.769 ************************************ 00:31:20.769 11:33:15 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:20.769 bdev_null0 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:20.769 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.770 11:33:15 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:20.770 [2024-11-19 11:33:15.252056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:20.770 bdev_null1 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:20.770 { 00:31:20.770 "params": { 00:31:20.770 "name": "Nvme$subsystem", 00:31:20.770 "trtype": "$TEST_TRANSPORT", 00:31:20.770 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:31:20.770 "adrfam": "ipv4", 00:31:20.770 "trsvcid": "$NVMF_PORT", 00:31:20.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.770 "hdgst": ${hdgst:-false}, 00:31:20.770 "ddgst": ${ddgst:-false} 00:31:20.770 }, 00:31:20.770 "method": "bdev_nvme_attach_controller" 00:31:20.770 } 00:31:20.770 EOF 00:31:20.770 )") 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:20.770 
11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:20.770 { 00:31:20.770 "params": { 00:31:20.770 "name": "Nvme$subsystem", 00:31:20.770 "trtype": "$TEST_TRANSPORT", 00:31:20.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.770 "adrfam": "ipv4", 00:31:20.770 "trsvcid": "$NVMF_PORT", 00:31:20.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.770 "hdgst": ${hdgst:-false}, 00:31:20.770 "ddgst": ${ddgst:-false} 00:31:20.770 }, 00:31:20.770 "method": "bdev_nvme_attach_controller" 00:31:20.770 } 00:31:20.770 EOF 00:31:20.770 )") 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:20.770 "params": { 00:31:20.770 "name": "Nvme0", 00:31:20.770 "trtype": "tcp", 00:31:20.770 "traddr": "10.0.0.2", 00:31:20.770 "adrfam": "ipv4", 00:31:20.770 "trsvcid": "4420", 00:31:20.770 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:20.770 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:20.770 "hdgst": false, 00:31:20.770 "ddgst": false 00:31:20.770 }, 00:31:20.770 "method": "bdev_nvme_attach_controller" 00:31:20.770 },{ 00:31:20.770 "params": { 00:31:20.770 "name": "Nvme1", 00:31:20.770 "trtype": "tcp", 00:31:20.770 "traddr": "10.0.0.2", 00:31:20.770 "adrfam": "ipv4", 00:31:20.770 "trsvcid": "4420", 00:31:20.770 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.770 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:20.770 "hdgst": false, 00:31:20.770 "ddgst": false 00:31:20.770 }, 00:31:20.770 "method": "bdev_nvme_attach_controller" 00:31:20.770 }' 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:20.770 11:33:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:20.770 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:20.770 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:20.770 fio-3.35 00:31:20.770 Starting 2 threads 00:31:32.964 00:31:32.964 filename0: (groupid=0, jobs=1): err= 0: pid=2796643: Tue Nov 19 11:33:26 2024 00:31:32.964 read: IOPS=101, BW=405KiB/s (415kB/s)(4064KiB/10038msec) 00:31:32.964 slat (nsec): min=7096, max=31751, avg=10073.25, stdev=4197.29 00:31:32.964 clat (usec): min=717, max=44881, avg=39488.80, stdev=7847.64 00:31:32.964 lat (usec): min=724, max=44909, avg=39498.87, stdev=7847.47 00:31:32.964 clat percentiles (usec): 00:31:32.964 | 1.00th=[ 775], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:32.964 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:32.964 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:32.964 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:31:32.964 | 99.99th=[44827] 00:31:32.964 bw ( KiB/s): min= 384, max= 480, per=30.57%, avg=404.80, stdev=29.87, samples=20 00:31:32.964 iops : min= 96, max= 120, avg=101.20, stdev= 7.47, samples=20 00:31:32.964 lat (usec) : 750=0.89%, 1000=3.05% 00:31:32.964 lat (msec) : 50=96.06% 00:31:32.964 cpu : usr=94.76%, sys=4.97%, ctx=12, majf=0, minf=9 00:31:32.964 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:32.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.964 issued rwts: total=1016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.964 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:32.964 filename1: (groupid=0, jobs=1): err= 0: pid=2796644: Tue Nov 19 11:33:26 2024 00:31:32.964 read: IOPS=229, BW=917KiB/s (939kB/s)(9200KiB/10037msec) 00:31:32.964 slat (nsec): min=7161, max=34749, avg=9779.17, stdev=4239.45 00:31:32.964 clat (usec): min=498, max=42843, avg=17425.64, stdev=20054.74 00:31:32.964 lat (usec): min=505, max=42872, avg=17435.42, stdev=20054.56 00:31:32.964 clat percentiles (usec): 00:31:32.964 | 1.00th=[ 529], 5.00th=[ 553], 10.00th=[ 562], 20.00th=[ 586], 00:31:32.964 | 30.00th=[ 619], 40.00th=[ 660], 50.00th=[ 725], 60.00th=[40633], 00:31:32.965 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:32.965 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:32.965 | 99.99th=[42730] 00:31:32.965 bw ( KiB/s): min= 768, max= 1472, per=69.47%, avg=918.40, stdev=190.62, samples=20 00:31:32.965 iops : min= 192, max= 368, avg=229.60, stdev=47.65, samples=20 00:31:32.965 lat (usec) : 500=0.04%, 750=52.57%, 1000=5.83% 00:31:32.965 lat (msec) : 2=0.35%, 50=41.22% 00:31:32.965 cpu : usr=95.03%, sys=4.69%, ctx=17, majf=0, minf=9 00:31:32.965 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.965 issued rwts: total=2300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.965 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:32.965 00:31:32.965 Run status group 0 (all jobs): 00:31:32.965 READ: bw=1321KiB/s (1353kB/s), 405KiB/s-917KiB/s (415kB/s-939kB/s), io=13.0MiB (13.6MB), run=10037-10038msec 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.965 00:31:32.965 real 0m11.391s 00:31:32.965 user 0m20.507s 00:31:32.965 sys 0m1.266s 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:32.965 11:33:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.965 ************************************ 00:31:32.965 END TEST fio_dif_1_multi_subsystems 00:31:32.965 ************************************ 00:31:32.965 11:33:26 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:32.965 11:33:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:32.965 11:33:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:32.965 11:33:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:32.965 ************************************ 00:31:32.965 START TEST fio_dif_rand_params 00:31:32.965 ************************************ 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:32.965 bdev_null0 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:32.965 [2024-11-19 11:33:26.698228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:32.965 { 00:31:32.965 "params": { 00:31:32.965 "name": "Nvme$subsystem", 00:31:32.965 "trtype": "$TEST_TRANSPORT", 00:31:32.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.965 "adrfam": "ipv4", 00:31:32.965 "trsvcid": "$NVMF_PORT", 00:31:32.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.965 "hdgst": ${hdgst:-false}, 00:31:32.965 "ddgst": 
${ddgst:-false} 00:31:32.965 }, 00:31:32.965 "method": "bdev_nvme_attach_controller" 00:31:32.965 } 00:31:32.965 EOF 00:31:32.965 )") 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:32.965 11:33:26 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file = 1 )) 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:32.966 "params": { 00:31:32.966 "name": "Nvme0", 00:31:32.966 "trtype": "tcp", 00:31:32.966 "traddr": "10.0.0.2", 00:31:32.966 "adrfam": "ipv4", 00:31:32.966 "trsvcid": "4420", 00:31:32.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:32.966 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:32.966 "hdgst": false, 00:31:32.966 "ddgst": false 00:31:32.966 }, 00:31:32.966 "method": "bdev_nvme_attach_controller" 00:31:32.966 }' 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:32.966 11:33:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 
-- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:32.966 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:32.966 ... 00:31:32.966 fio-3.35 00:31:32.966 Starting 3 threads 00:31:38.228 00:31:38.228 filename0: (groupid=0, jobs=1): err= 0: pid=2798044: Tue Nov 19 11:33:32 2024 00:31:38.228 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(153MiB/5045msec) 00:31:38.228 slat (nsec): min=6994, max=54881, avg=17204.77, stdev=4331.34 00:31:38.228 clat (usec): min=4323, max=51782, avg=12347.87, stdev=5165.13 00:31:38.228 lat (usec): min=4332, max=51800, avg=12365.08, stdev=5164.96 00:31:38.228 clat percentiles (usec): 00:31:38.228 | 1.00th=[ 4490], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10290], 00:31:38.228 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:31:38.228 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14484], 95.00th=[15401], 00:31:38.228 | 99.00th=[49021], 99.50th=[50594], 99.90th=[51119], 99.95th=[51643], 00:31:38.229 | 99.99th=[51643] 00:31:38.229 bw ( KiB/s): min=22272, max=34560, per=33.07%, avg=31180.80, stdev=3451.10, samples=10 00:31:38.229 iops : min= 174, max= 270, avg=243.60, stdev=26.96, samples=10 00:31:38.229 lat (msec) : 10=15.16%, 20=83.20%, 50=0.90%, 100=0.74% 00:31:38.229 cpu : usr=96.23%, sys=3.25%, ctx=12, majf=0, minf=58 00:31:38.229 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.229 issued rwts: total=1220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.229 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:38.229 filename0: (groupid=0, jobs=1): err= 0: pid=2798045: Tue Nov 19 11:33:32 2024 00:31:38.229 read: IOPS=244, BW=30.6MiB/s (32.1MB/s)(153MiB/5003msec) 00:31:38.229 slat (nsec): min=8008, 
max=61444, avg=17084.07, stdev=4303.64 00:31:38.229 clat (usec): min=4389, max=57865, avg=12229.43, stdev=3944.54 00:31:38.229 lat (usec): min=4397, max=57892, avg=12246.52, stdev=3944.65 00:31:38.229 clat percentiles (usec): 00:31:38.229 | 1.00th=[ 4621], 5.00th=[ 8356], 10.00th=[ 9634], 20.00th=[10552], 00:31:38.229 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[12518], 00:31:38.229 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14353], 95.00th=[15008], 00:31:38.229 | 99.00th=[16450], 99.50th=[46400], 99.90th=[56361], 99.95th=[57934], 00:31:38.229 | 99.99th=[57934] 00:31:38.229 bw ( KiB/s): min=29696, max=34048, per=33.21%, avg=31308.80, stdev=1423.05, samples=10 00:31:38.229 iops : min= 232, max= 266, avg=244.60, stdev=11.12, samples=10 00:31:38.229 lat (msec) : 10=13.22%, 20=86.04%, 50=0.24%, 100=0.49% 00:31:38.229 cpu : usr=95.60%, sys=3.88%, ctx=12, majf=0, minf=61 00:31:38.229 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.229 issued rwts: total=1225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.229 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:38.229 filename0: (groupid=0, jobs=1): err= 0: pid=2798046: Tue Nov 19 11:33:32 2024 00:31:38.229 read: IOPS=254, BW=31.8MiB/s (33.3MB/s)(159MiB/5003msec) 00:31:38.229 slat (nsec): min=7652, max=48396, avg=19027.25, stdev=4986.94 00:31:38.229 clat (usec): min=4351, max=49789, avg=11786.18, stdev=3990.12 00:31:38.229 lat (usec): min=4363, max=49817, avg=11805.21, stdev=3990.15 00:31:38.229 clat percentiles (usec): 00:31:38.229 | 1.00th=[ 4752], 5.00th=[ 8160], 10.00th=[ 9372], 20.00th=[10159], 00:31:38.229 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11600], 60.00th=[11863], 00:31:38.229 | 70.00th=[12387], 80.00th=[12911], 90.00th=[13829], 95.00th=[14353], 00:31:38.229 | 
99.00th=[16319], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:31:38.229 | 99.99th=[49546] 00:31:38.229 bw ( KiB/s): min=30208, max=34560, per=34.43%, avg=32460.80, stdev=1447.15, samples=10 00:31:38.229 iops : min= 236, max= 270, avg=253.60, stdev=11.31, samples=10 00:31:38.229 lat (msec) : 10=17.39%, 20=81.67%, 50=0.94% 00:31:38.229 cpu : usr=93.84%, sys=5.64%, ctx=19, majf=0, minf=59 00:31:38.229 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.229 issued rwts: total=1271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.229 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:38.229 00:31:38.229 Run status group 0 (all jobs): 00:31:38.229 READ: bw=92.1MiB/s (96.5MB/s), 30.2MiB/s-31.8MiB/s (31.7MB/s-33.3MB/s), io=465MiB (487MB), run=5003-5045msec 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.229 bdev_null0 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.229 [2024-11-19 11:33:32.990567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.229 11:33:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.229 bdev_null1 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.229 11:33:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.229 bdev_null2 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.229 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:38.230 11:33:33 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:38.230 { 00:31:38.230 "params": { 00:31:38.230 "name": "Nvme$subsystem", 00:31:38.230 "trtype": "$TEST_TRANSPORT", 00:31:38.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.230 "adrfam": "ipv4", 00:31:38.230 "trsvcid": "$NVMF_PORT", 00:31:38.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.230 "hdgst": ${hdgst:-false}, 00:31:38.230 "ddgst": ${ddgst:-false} 00:31:38.230 }, 00:31:38.230 "method": "bdev_nvme_attach_controller" 00:31:38.230 } 00:31:38.230 EOF 00:31:38.230 )") 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:38.230 { 00:31:38.230 "params": { 00:31:38.230 "name": "Nvme$subsystem", 00:31:38.230 "trtype": "$TEST_TRANSPORT", 00:31:38.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.230 "adrfam": "ipv4", 00:31:38.230 "trsvcid": "$NVMF_PORT", 00:31:38.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.230 "hdgst": ${hdgst:-false}, 00:31:38.230 "ddgst": ${ddgst:-false} 00:31:38.230 }, 00:31:38.230 "method": "bdev_nvme_attach_controller" 00:31:38.230 } 00:31:38.230 EOF 00:31:38.230 )") 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # 
(( file++ )) 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:38.230 { 00:31:38.230 "params": { 00:31:38.230 "name": "Nvme$subsystem", 00:31:38.230 "trtype": "$TEST_TRANSPORT", 00:31:38.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.230 "adrfam": "ipv4", 00:31:38.230 "trsvcid": "$NVMF_PORT", 00:31:38.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.230 "hdgst": ${hdgst:-false}, 00:31:38.230 "ddgst": ${ddgst:-false} 00:31:38.230 }, 00:31:38.230 "method": "bdev_nvme_attach_controller" 00:31:38.230 } 00:31:38.230 EOF 00:31:38.230 )") 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:38.230 "params": { 00:31:38.230 "name": "Nvme0", 00:31:38.230 "trtype": "tcp", 00:31:38.230 "traddr": "10.0.0.2", 00:31:38.230 "adrfam": "ipv4", 00:31:38.230 "trsvcid": "4420", 00:31:38.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:38.230 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:38.230 "hdgst": false, 00:31:38.230 "ddgst": false 00:31:38.230 }, 00:31:38.230 "method": "bdev_nvme_attach_controller" 00:31:38.230 },{ 00:31:38.230 "params": { 00:31:38.230 "name": "Nvme1", 00:31:38.230 "trtype": "tcp", 00:31:38.230 "traddr": "10.0.0.2", 00:31:38.230 "adrfam": "ipv4", 00:31:38.230 "trsvcid": "4420", 00:31:38.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:38.230 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:38.230 "hdgst": false, 00:31:38.230 "ddgst": false 00:31:38.230 }, 00:31:38.230 "method": "bdev_nvme_attach_controller" 00:31:38.230 },{ 00:31:38.230 "params": { 00:31:38.230 "name": "Nvme2", 00:31:38.230 "trtype": "tcp", 00:31:38.230 "traddr": "10.0.0.2", 00:31:38.230 "adrfam": "ipv4", 00:31:38.230 "trsvcid": "4420", 00:31:38.230 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:38.230 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:38.230 "hdgst": false, 00:31:38.230 "ddgst": false 00:31:38.230 }, 00:31:38.230 "method": "bdev_nvme_attach_controller" 00:31:38.230 }' 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:38.230 11:33:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:38.230 11:33:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:38.230 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:38.230 ... 00:31:38.230 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:38.230 ... 00:31:38.230 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:38.230 ... 
00:31:38.230 fio-3.35 00:31:38.230 Starting 24 threads 00:31:50.433 00:31:50.433 filename0: (groupid=0, jobs=1): err= 0: pid=2798903: Tue Nov 19 11:33:44 2024 00:31:50.433 read: IOPS=164, BW=658KiB/s (674kB/s)(6648KiB/10100msec) 00:31:50.433 slat (nsec): min=3330, max=56104, avg=17104.42, stdev=10421.83 00:31:50.433 clat (usec): min=316, max=371672, avg=96518.19, stdev=105764.14 00:31:50.433 lat (usec): min=333, max=371682, avg=96535.30, stdev=105760.13 00:31:50.433 clat percentiles (msec): 00:31:50.433 | 1.00th=[ 3], 5.00th=[ 18], 10.00th=[ 31], 20.00th=[ 33], 00:31:50.433 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:50.433 | 70.00th=[ 35], 80.00th=[ 255], 90.00th=[ 275], 95.00th=[ 275], 00:31:50.433 | 99.00th=[ 288], 99.50th=[ 300], 99.90th=[ 372], 99.95th=[ 372], 00:31:50.433 | 99.99th=[ 372] 00:31:50.433 bw ( KiB/s): min= 144, max= 2544, per=4.48%, avg=658.40, stdev=773.82, samples=20 00:31:50.433 iops : min= 36, max= 636, avg=164.60, stdev=193.46, samples=20 00:31:50.433 lat (usec) : 500=0.06% 00:31:50.433 lat (msec) : 4=1.81%, 10=1.44%, 20=3.07%, 50=65.70%, 250=2.89% 00:31:50.433 lat (msec) : 500=25.03% 00:31:50.433 cpu : usr=98.30%, sys=1.29%, ctx=21, majf=0, minf=9 00:31:50.433 IO depths : 1=4.3%, 2=10.2%, 4=23.6%, 8=53.5%, 16=8.4%, 32=0.0%, >=64=0.0% 00:31:50.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.433 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.433 issued rwts: total=1662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.433 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.433 filename0: (groupid=0, jobs=1): err= 0: pid=2798904: Tue Nov 19 11:33:44 2024 00:31:50.433 read: IOPS=157, BW=629KiB/s (644kB/s)(6344KiB/10080msec) 00:31:50.433 slat (usec): min=7, max=111, avg=46.17, stdev=28.49 00:31:50.433 clat (msec): min=17, max=412, avg=101.16, stdev=108.12 00:31:50.433 lat (msec): min=17, max=412, avg=101.21, stdev=108.10 00:31:50.433 clat 
percentiles (msec): 00:31:50.433 | 1.00th=[ 18], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:31:50.433 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:50.433 | 70.00th=[ 35], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 275], 00:31:50.433 | 99.00th=[ 372], 99.50th=[ 388], 99.90th=[ 414], 99.95th=[ 414], 00:31:50.433 | 99.99th=[ 414] 00:31:50.433 bw ( KiB/s): min= 176, max= 2048, per=4.27%, avg=628.00, stdev=703.69, samples=20 00:31:50.433 iops : min= 44, max= 512, avg=157.00, stdev=175.92, samples=20 00:31:50.433 lat (msec) : 20=1.01%, 50=69.61%, 250=4.29%, 500=25.09% 00:31:50.433 cpu : usr=98.27%, sys=1.26%, ctx=34, majf=0, minf=9 00:31:50.433 IO depths : 1=4.7%, 2=9.6%, 4=20.9%, 8=56.9%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:50.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.433 complete : 0=0.0%, 4=92.9%, 8=1.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.433 issued rwts: total=1586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.433 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.433 filename0: (groupid=0, jobs=1): err= 0: pid=2798905: Tue Nov 19 11:33:44 2024 00:31:50.433 read: IOPS=154, BW=620KiB/s (634kB/s)(6240KiB/10071msec) 00:31:50.433 slat (usec): min=7, max=116, avg=49.99, stdev=30.00 00:31:50.433 clat (msec): min=29, max=409, avg=102.59, stdev=108.15 00:31:50.433 lat (msec): min=29, max=409, avg=102.64, stdev=108.13 00:31:50.433 clat percentiles (msec): 00:31:50.433 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:31:50.433 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:50.433 | 70.00th=[ 130], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 275], 00:31:50.433 | 99.00th=[ 342], 99.50th=[ 397], 99.90th=[ 409], 99.95th=[ 409], 00:31:50.433 | 99.99th=[ 409] 00:31:50.433 bw ( KiB/s): min= 144, max= 1923, per=4.20%, avg=617.75, stdev=693.70, samples=20 00:31:50.433 iops : min= 36, max= 480, avg=154.40, stdev=173.35, samples=20 00:31:50.433 lat (msec) : 
50=69.74%, 250=4.23%, 500=26.03% 00:31:50.433 cpu : usr=98.50%, sys=1.08%, ctx=16, majf=0, minf=9 00:31:50.433 IO depths : 1=4.6%, 2=9.4%, 4=20.6%, 8=57.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:31:50.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.433 complete : 0=0.0%, 4=92.8%, 8=1.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.433 issued rwts: total=1560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.433 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.433 filename0: (groupid=0, jobs=1): err= 0: pid=2798906: Tue Nov 19 11:33:44 2024 00:31:50.433 read: IOPS=154, BW=619KiB/s (634kB/s)(6232KiB/10070msec) 00:31:50.433 slat (usec): min=5, max=121, avg=54.79, stdev=30.71 00:31:50.433 clat (msec): min=27, max=434, avg=102.83, stdev=109.06 00:31:50.433 lat (msec): min=27, max=434, avg=102.89, stdev=109.03 00:31:50.433 clat percentiles (msec): 00:31:50.433 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:31:50.433 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:50.433 | 70.00th=[ 95], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 275], 00:31:50.433 | 99.00th=[ 368], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:31:50.433 | 99.99th=[ 435] 00:31:50.433 bw ( KiB/s): min= 128, max= 1920, per=4.20%, avg=616.80, stdev=693.68, samples=20 00:31:50.433 iops : min= 32, max= 480, avg=154.20, stdev=173.42, samples=20 00:31:50.433 lat (msec) : 50=68.81%, 100=1.41%, 250=4.11%, 500=25.67% 00:31:50.433 cpu : usr=98.09%, sys=1.33%, ctx=36, majf=0, minf=9 00:31:50.433 IO depths : 1=4.6%, 2=9.2%, 4=20.3%, 8=58.0%, 16=8.0%, 32=0.0%, >=64=0.0% 00:31:50.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.433 complete : 0=0.0%, 4=92.7%, 8=1.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.433 issued rwts: total=1558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.433 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.433 filename0: (groupid=0, jobs=1): err= 0: 
pid=2798907: Tue Nov 19 11:33:44 2024 00:31:50.433 read: IOPS=155, BW=623KiB/s (638kB/s)(6280KiB/10080msec) 00:31:50.433 slat (usec): min=7, max=150, avg=34.04, stdev=20.65 00:31:50.433 clat (msec): min=17, max=424, avg=102.35, stdev=112.95 00:31:50.433 lat (msec): min=17, max=424, avg=102.38, stdev=112.95 00:31:50.433 clat percentiles (msec): 00:31:50.433 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:31:50.433 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:50.433 | 70.00th=[ 35], 80.00th=[ 264], 90.00th=[ 275], 95.00th=[ 279], 00:31:50.433 | 99.00th=[ 401], 99.50th=[ 414], 99.90th=[ 426], 99.95th=[ 426], 00:31:50.433 | 99.99th=[ 426] 00:31:50.433 bw ( KiB/s): min= 128, max= 2048, per=4.23%, avg=621.60, stdev=707.86, samples=20 00:31:50.433 iops : min= 32, max= 512, avg=155.40, stdev=176.97, samples=20 00:31:50.433 lat (msec) : 20=0.89%, 50=70.45%, 100=0.38%, 250=2.55%, 500=25.73% 00:31:50.433 cpu : usr=96.92%, sys=1.88%, ctx=180, majf=0, minf=9 00:31:50.433 IO depths : 1=4.8%, 2=10.4%, 4=23.0%, 8=54.1%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:50.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.433 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.433 issued rwts: total=1570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.433 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.433 filename0: (groupid=0, jobs=1): err= 0: pid=2798908: Tue Nov 19 11:33:44 2024 00:31:50.433 read: IOPS=158, BW=634KiB/s (649kB/s)(6400KiB/10093msec) 00:31:50.433 slat (nsec): min=4030, max=83085, avg=27956.25, stdev=14523.52 00:31:50.433 clat (msec): min=17, max=354, avg=100.11, stdev=106.16 00:31:50.433 lat (msec): min=17, max=354, avg=100.14, stdev=106.15 00:31:50.433 clat percentiles (msec): 00:31:50.433 | 1.00th=[ 18], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:31:50.433 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:50.433 | 70.00th=[ 35], 80.00th=[ 
255], 90.00th=[ 275], 95.00th=[ 275], 00:31:50.433 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 355], 99.95th=[ 355], 00:31:50.433 | 99.99th=[ 355] 00:31:50.433 bw ( KiB/s): min= 144, max= 2048, per=4.31%, avg=633.60, stdev=716.00, samples=20 00:31:50.433 iops : min= 36, max= 512, avg=158.40, stdev=179.00, samples=20 00:31:50.433 lat (msec) : 20=1.88%, 50=69.12%, 250=3.12%, 500=25.87% 00:31:50.433 cpu : usr=97.78%, sys=1.39%, ctx=82, majf=0, minf=9 00:31:50.433 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:31:50.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.433 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.433 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.433 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.433 filename0: (groupid=0, jobs=1): err= 0: pid=2798909: Tue Nov 19 11:33:44 2024 00:31:50.433 read: IOPS=157, BW=629KiB/s (644kB/s)(6336KiB/10077msec) 00:31:50.433 slat (nsec): min=6954, max=98493, avg=40083.49, stdev=23640.96 00:31:50.433 clat (msec): min=13, max=370, avg=100.85, stdev=106.46 00:31:50.433 lat (msec): min=13, max=370, avg=100.89, stdev=106.45 00:31:50.433 clat percentiles (msec): 00:31:50.433 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:31:50.433 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:50.433 | 70.00th=[ 35], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 275], 00:31:50.433 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 372], 99.95th=[ 372], 00:31:50.433 | 99.99th=[ 372] 00:31:50.434 bw ( KiB/s): min= 144, max= 2048, per=4.27%, avg=627.20, stdev=704.38, samples=20 00:31:50.434 iops : min= 36, max= 512, avg=156.80, stdev=176.09, samples=20 00:31:50.434 lat (msec) : 20=1.01%, 50=69.70%, 250=3.03%, 500=26.26% 00:31:50.434 cpu : usr=98.38%, sys=1.10%, ctx=42, majf=0, minf=9 00:31:50.434 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 
00:31:50.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.434 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.434 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.434 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.434 filename0: (groupid=0, jobs=1): err= 0: pid=2798910: Tue Nov 19 11:33:44 2024 00:31:50.434 read: IOPS=156, BW=627KiB/s (642kB/s)(6336KiB/10110msec) 00:31:50.434 slat (nsec): min=6303, max=87773, avg=27703.25, stdev=18119.13 00:31:50.434 clat (msec): min=25, max=315, avg=101.52, stdev=105.14 00:31:50.434 lat (msec): min=25, max=315, avg=101.54, stdev=105.13 00:31:50.434 clat percentiles (msec): 00:31:50.434 | 1.00th=[ 27], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 34], 00:31:50.434 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:50.434 | 70.00th=[ 130], 80.00th=[ 259], 90.00th=[ 275], 95.00th=[ 275], 00:31:50.434 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 317], 99.95th=[ 317], 00:31:50.434 | 99.99th=[ 317] 00:31:50.434 bw ( KiB/s): min= 128, max= 1920, per=4.27%, avg=627.20, stdev=703.28, samples=20 00:31:50.434 iops : min= 32, max= 480, avg=156.80, stdev=175.82, samples=20 00:31:50.434 lat (msec) : 50=69.70%, 250=3.03%, 500=27.27% 00:31:50.434 cpu : usr=98.25%, sys=1.27%, ctx=21, majf=0, minf=9 00:31:50.434 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:50.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.434 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.434 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.434 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.434 filename1: (groupid=0, jobs=1): err= 0: pid=2798911: Tue Nov 19 11:33:44 2024 00:31:50.434 read: IOPS=143, BW=573KiB/s (586kB/s)(5760KiB/10058msec) 00:31:50.434 slat (usec): min=8, max=106, avg=31.35, stdev=16.84 00:31:50.434 
clat (msec): min=24, max=530, avg=111.45, stdev=144.64 00:31:50.434 lat (msec): min=24, max=530, avg=111.48, stdev=144.63 00:31:50.434 clat percentiles (msec): 00:31:50.434 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33], 00:31:50.434 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:50.434 | 70.00th=[ 34], 80.00th=[ 266], 90.00th=[ 384], 95.00th=[ 422], 00:31:50.434 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 531], 99.95th=[ 531], 00:31:50.434 | 99.99th=[ 531] 00:31:50.434 bw ( KiB/s): min= 128, max= 1923, per=3.88%, avg=569.75, stdev=712.81, samples=20 00:31:50.434 iops : min= 32, max= 480, avg=142.40, stdev=178.13, samples=20 00:31:50.434 lat (msec) : 50=74.44%, 100=2.22%, 250=1.25%, 500=21.81%, 750=0.28% 00:31:50.434 cpu : usr=97.97%, sys=1.26%, ctx=71, majf=0, minf=9 00:31:50.434 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:50.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.434 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.434 issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.434 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.434 filename1: (groupid=0, jobs=1): err= 0: pid=2798912: Tue Nov 19 11:33:44 2024 00:31:50.434 read: IOPS=139, BW=560KiB/s (573kB/s)(5632KiB/10058msec) 00:31:50.434 slat (usec): min=7, max=138, avg=46.21, stdev=31.52 00:31:50.434 clat (msec): min=29, max=546, avg=113.94, stdev=150.52 00:31:50.434 lat (msec): min=29, max=546, avg=113.99, stdev=150.50 00:31:50.434 clat percentiles (msec): 00:31:50.434 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:31:50.434 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:50.434 | 70.00th=[ 35], 80.00th=[ 359], 90.00th=[ 401], 95.00th=[ 422], 00:31:50.434 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 550], 99.95th=[ 550], 00:31:50.434 | 99.99th=[ 550] 00:31:50.434 bw ( KiB/s): min= 128, max= 1920, 
per=3.79%, avg=556.95, stdev=717.82, samples=20 00:31:50.434 iops : min= 32, max= 480, avg=139.20, stdev=179.39, samples=20 00:31:50.434 lat (msec) : 50=75.99%, 100=1.14%, 250=1.42%, 500=21.16%, 750=0.28% 00:31:50.434 cpu : usr=97.63%, sys=1.64%, ctx=155, majf=0, minf=9 00:31:50.434 IO depths : 1=1.8%, 2=8.0%, 4=25.0%, 8=54.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:31:50.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.434 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.434 issued rwts: total=1408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.434 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:50.434 filename1: (groupid=0, jobs=1): err= 0: pid=2798913: Tue Nov 19 11:33:44 2024 00:31:50.434 read: IOPS=155, BW=623KiB/s (638kB/s)(6272KiB/10068msec) 00:31:50.434 slat (nsec): min=7810, max=55247, avg=19766.64, stdev=11017.51 00:31:50.434 clat (msec): min=30, max=346, avg=102.45, stdev=105.79 00:31:50.434 lat (msec): min=30, max=346, avg=102.47, stdev=105.78 00:31:50.434 clat percentiles (msec): 00:31:50.434 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 34], 00:31:50.434 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:50.434 | 70.00th=[ 113], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 275], 00:31:50.434 | 99.00th=[ 279], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:31:50.434 | 99.99th=[ 347] 00:31:50.434 bw ( KiB/s): min= 144, max= 1920, per=4.22%, avg=620.80, stdev=691.25, samples=20 00:31:50.434 iops : min= 36, max= 480, avg=155.20, stdev=172.81, samples=20 00:31:50.434 lat (msec) : 50=68.37%, 100=1.02%, 250=3.44%, 500=27.17% 00:31:50.434 cpu : usr=98.51%, sys=1.05%, ctx=20, majf=0, minf=10 00:31:50.434 IO depths : 1=4.6%, 2=10.7%, 4=24.6%, 8=52.2%, 16=7.9%, 32=0.0%, >=64=0.0% 00:31:50.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.434 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.434 
issued rwts: total=1568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.434 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.434 filename1: (groupid=0, jobs=1): err= 0: pid=2798914: Tue Nov 19 11:33:44 2024
00:31:50.434 read: IOPS=140, BW=560KiB/s (573kB/s)(5632KiB/10057msec)
00:31:50.434 slat (usec): min=6, max=136, avg=37.25, stdev=14.41
00:31:50.434 clat (msec): min=22, max=507, avg=113.94, stdev=150.50
00:31:50.434 lat (msec): min=22, max=507, avg=113.98, stdev=150.49
00:31:50.434 clat percentiles (msec):
00:31:50.434 | 1.00th=[ 25], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33],
00:31:50.434 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.434 | 70.00th=[ 34], 80.00th=[ 363], 90.00th=[ 409], 95.00th=[ 426],
00:31:50.434 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 506], 99.95th=[ 506],
00:31:50.434 | 99.99th=[ 506]
00:31:50.434 bw ( KiB/s): min= 128, max= 1920, per=3.79%, avg=556.80, stdev=719.63, samples=20
00:31:50.434 iops : min= 32, max= 480, avg=139.20, stdev=179.91, samples=20
00:31:50.434 lat (msec) : 50=75.71%, 100=1.56%, 250=1.14%, 500=21.45%, 750=0.14%
00:31:50.434 cpu : usr=97.80%, sys=1.58%, ctx=49, majf=0, minf=9
00:31:50.434 IO depths : 1=4.8%, 2=10.9%, 4=24.7%, 8=51.8%, 16=7.7%, 32=0.0%, >=64=0.0%
00:31:50.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.434 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.434 issued rwts: total=1408,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.434 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.434 filename1: (groupid=0, jobs=1): err= 0: pid=2798915: Tue Nov 19 11:33:44 2024
00:31:50.434 read: IOPS=140, BW=560KiB/s (574kB/s)(5632KiB/10056msec)
00:31:50.434 slat (usec): min=7, max=120, avg=61.48, stdev=21.44
00:31:50.434 clat (msec): min=14, max=530, avg=113.77, stdev=150.11
00:31:50.434 lat (msec): min=14, max=530, avg=113.83, stdev=150.09
00:31:50.434 clat percentiles (msec):
00:31:50.434 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33],
00:31:50.434 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.434 | 70.00th=[ 34], 80.00th=[ 363], 90.00th=[ 384], 95.00th=[ 422],
00:31:50.434 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 531], 99.95th=[ 531],
00:31:50.434 | 99.99th=[ 531]
00:31:50.434 bw ( KiB/s): min= 128, max= 1936, per=3.79%, avg=556.95, stdev=719.92, samples=20
00:31:50.434 iops : min= 32, max= 484, avg=139.20, stdev=179.91, samples=20
00:31:50.434 lat (msec) : 20=0.85%, 50=74.72%, 100=1.70%, 500=22.30%, 750=0.43%
00:31:50.434 cpu : usr=97.58%, sys=1.52%, ctx=101, majf=0, minf=9
00:31:50.434 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0%
00:31:50.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.434 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.434 issued rwts: total=1408,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.434 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.434 filename1: (groupid=0, jobs=1): err= 0: pid=2798916: Tue Nov 19 11:33:44 2024
00:31:50.434 read: IOPS=155, BW=622KiB/s (637kB/s)(6272KiB/10077msec)
00:31:50.434 slat (usec): min=8, max=122, avg=35.41, stdev=19.85
00:31:50.434 clat (msec): min=17, max=408, avg=101.90, stdev=110.94
00:31:50.434 lat (msec): min=17, max=408, avg=101.94, stdev=110.94
00:31:50.434 clat percentiles (msec):
00:31:50.434 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33],
00:31:50.434 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.434 | 70.00th=[ 35], 80.00th=[ 262], 90.00th=[ 275], 95.00th=[ 279],
00:31:50.434 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 409], 99.95th=[ 409],
00:31:50.434 | 99.99th=[ 409]
00:31:50.434 bw ( KiB/s): min= 144, max= 2048, per=4.22%, avg=620.80, stdev=708.36, samples=20
00:31:50.434 iops : min= 36, max= 512, avg=155.20, stdev=177.09, samples=20
00:31:50.434 lat (msec) : 20=1.02%, 50=70.41%, 250=1.91%, 500=26.66%
00:31:50.434 cpu : usr=97.48%, sys=1.66%, ctx=144, majf=0, minf=9
00:31:50.434 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0%
00:31:50.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 issued rwts: total=1568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.435 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.435 filename1: (groupid=0, jobs=1): err= 0: pid=2798917: Tue Nov 19 11:33:44 2024
00:31:50.435 read: IOPS=157, BW=629KiB/s (644kB/s)(6336KiB/10077msec)
00:31:50.435 slat (nsec): min=7898, max=84008, avg=27138.48, stdev=14870.88
00:31:50.435 clat (msec): min=17, max=363, avg=100.96, stdev=106.36
00:31:50.435 lat (msec): min=17, max=363, avg=100.99, stdev=106.35
00:31:50.435 clat percentiles (msec):
00:31:50.435 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33],
00:31:50.435 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.435 | 70.00th=[ 35], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 275],
00:31:50.435 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 363], 99.95th=[ 363],
00:31:50.435 | 99.99th=[ 363]
00:31:50.435 bw ( KiB/s): min= 144, max= 2048, per=4.27%, avg=627.20, stdev=704.38, samples=20
00:31:50.435 iops : min= 36, max= 512, avg=156.80, stdev=176.09, samples=20
00:31:50.435 lat (msec) : 20=0.88%, 50=69.82%, 250=3.03%, 500=26.26%
00:31:50.435 cpu : usr=98.20%, sys=1.28%, ctx=33, majf=0, minf=9
00:31:50.435 IO depths : 1=4.5%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0%
00:31:50.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.435 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.435 filename1: (groupid=0, jobs=1): err= 0: pid=2798918: Tue Nov 19 11:33:44 2024
00:31:50.435 read: IOPS=155, BW=621KiB/s (636kB/s)(6256KiB/10070msec)
00:31:50.435 slat (usec): min=7, max=109, avg=33.29, stdev=19.33
00:31:50.435 clat (msec): min=29, max=397, avg=102.47, stdev=108.67
00:31:50.435 lat (msec): min=29, max=397, avg=102.50, stdev=108.66
00:31:50.435 clat percentiles (msec):
00:31:50.435 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33],
00:31:50.435 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.435 | 70.00th=[ 39], 80.00th=[ 255], 90.00th=[ 275], 95.00th=[ 275],
00:31:50.435 | 99.00th=[ 363], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397],
00:31:50.435 | 99.99th=[ 397]
00:31:50.435 bw ( KiB/s): min= 144, max= 1920, per=4.23%, avg=621.60, stdev=696.81, samples=20
00:31:50.435 iops : min= 36, max= 480, avg=155.40, stdev=174.20, samples=20
00:31:50.435 lat (msec) : 50=70.59%, 250=3.96%, 500=25.45%
00:31:50.435 cpu : usr=97.83%, sys=1.51%, ctx=70, majf=0, minf=9
00:31:50.435 IO depths : 1=4.5%, 2=10.4%, 4=23.8%, 8=53.3%, 16=8.1%, 32=0.0%, >=64=0.0%
00:31:50.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 issued rwts: total=1564,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.435 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.435 filename2: (groupid=0, jobs=1): err= 0: pid=2798919: Tue Nov 19 11:33:44 2024
00:31:50.435 read: IOPS=154, BW=620KiB/s (634kB/s)(6232KiB/10058msec)
00:31:50.435 slat (usec): min=7, max=107, avg=35.58, stdev=24.59
00:31:50.435 clat (msec): min=29, max=431, avg=102.85, stdev=108.90
00:31:50.435 lat (msec): min=29, max=431, avg=102.89, stdev=108.88
00:31:50.435 clat percentiles (msec):
00:31:50.435 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33],
00:31:50.435 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.435 | 70.00th=[ 100], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 275],
00:31:50.435 | 99.00th=[ 368], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430],
00:31:50.435 | 99.99th=[ 430]
00:31:50.435 bw ( KiB/s): min= 128, max= 1920, per=4.20%, avg=616.80, stdev=693.68, samples=20
00:31:50.435 iops : min= 32, max= 480, avg=154.20, stdev=173.42, samples=20
00:31:50.435 lat (msec) : 50=69.83%, 100=0.39%, 250=3.59%, 500=26.19%
00:31:50.435 cpu : usr=98.17%, sys=1.27%, ctx=35, majf=0, minf=9
00:31:50.435 IO depths : 1=4.6%, 2=9.2%, 4=20.1%, 8=58.2%, 16=8.0%, 32=0.0%, >=64=0.0%
00:31:50.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 complete : 0=0.0%, 4=92.6%, 8=1.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 issued rwts: total=1558,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.435 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.435 filename2: (groupid=0, jobs=1): err= 0: pid=2798920: Tue Nov 19 11:33:44 2024
00:31:50.435 read: IOPS=161, BW=648KiB/s (663kB/s)(6544KiB/10101msec)
00:31:50.435 slat (nsec): min=6011, max=38407, avg=11793.60, stdev=4132.51
00:31:50.435 clat (msec): min=2, max=403, avg=98.50, stdev=108.74
00:31:50.435 lat (msec): min=2, max=403, avg=98.51, stdev=108.74
00:31:50.435 clat percentiles (msec):
00:31:50.435 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 32], 20.00th=[ 34],
00:31:50.435 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.435 | 70.00th=[ 35], 80.00th=[ 262], 90.00th=[ 275], 95.00th=[ 279],
00:31:50.435 | 99.00th=[ 384], 99.50th=[ 401], 99.90th=[ 405], 99.95th=[ 405],
00:31:50.435 | 99.99th=[ 405]
00:31:50.435 bw ( KiB/s): min= 176, max= 2308, per=4.41%, avg=648.20, stdev=753.68, samples=20
00:31:50.435 iops : min= 44, max= 577, avg=162.05, stdev=188.42, samples=20
00:31:50.435 lat (msec) : 4=0.86%, 10=1.10%, 20=1.65%, 50=67.79%, 250=4.52%
00:31:50.435 lat (msec) : 500=24.08%
00:31:50.435 cpu : usr=97.41%, sys=1.90%, ctx=68, majf=0, minf=9
00:31:50.435 IO depths : 1=4.4%, 2=9.0%, 4=19.7%, 8=58.7%, 16=8.2%, 32=0.0%, >=64=0.0%
00:31:50.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 complete : 0=0.0%, 4=92.5%, 8=1.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 issued rwts: total=1636,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.435 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.435 filename2: (groupid=0, jobs=1): err= 0: pid=2798921: Tue Nov 19 11:33:44 2024
00:31:50.435 read: IOPS=153, BW=613KiB/s (628kB/s)(6168KiB/10058msec)
00:31:50.435 slat (usec): min=8, max=121, avg=46.66, stdev=28.44
00:31:50.435 clat (msec): min=29, max=422, avg=103.89, stdev=113.00
00:31:50.435 lat (msec): min=29, max=422, avg=103.94, stdev=112.97
00:31:50.435 clat percentiles (msec):
00:31:50.435 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33],
00:31:50.435 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.435 | 70.00th=[ 54], 80.00th=[ 264], 90.00th=[ 275], 95.00th=[ 279],
00:31:50.435 | 99.00th=[ 422], 99.50th=[ 422], 99.90th=[ 422], 99.95th=[ 422],
00:31:50.435 | 99.99th=[ 422]
00:31:50.435 bw ( KiB/s): min= 128, max= 1923, per=4.16%, avg=610.55, stdev=688.02, samples=20
00:31:50.435 iops : min= 32, max= 480, avg=152.60, stdev=171.93, samples=20
00:31:50.435 lat (msec) : 50=69.52%, 100=1.69%, 250=2.20%, 500=26.59%
00:31:50.435 cpu : usr=98.57%, sys=0.99%, ctx=15, majf=0, minf=9
00:31:50.435 IO depths : 1=4.9%, 2=10.2%, 4=22.0%, 8=55.3%, 16=7.6%, 32=0.0%, >=64=0.0%
00:31:50.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 complete : 0=0.0%, 4=93.2%, 8=1.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 issued rwts: total=1542,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.435 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.435 filename2: (groupid=0, jobs=1): err= 0: pid=2798922: Tue Nov 19 11:33:44 2024
00:31:50.435 read: IOPS=157, BW=629KiB/s (644kB/s)(6336KiB/10077msec)
00:31:50.435 slat (nsec): min=7977, max=89545, avg=27940.37, stdev=14683.38
00:31:50.435 clat (msec): min=17, max=322, avg=100.96, stdev=106.32
00:31:50.435 lat (msec): min=17, max=322, avg=100.99, stdev=106.31
00:31:50.435 clat percentiles (msec):
00:31:50.435 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33],
00:31:50.435 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.435 | 70.00th=[ 35], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 275],
00:31:50.435 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 321], 99.95th=[ 321],
00:31:50.435 | 99.99th=[ 321]
00:31:50.435 bw ( KiB/s): min= 144, max= 2048, per=4.27%, avg=627.20, stdev=704.38, samples=20
00:31:50.435 iops : min= 36, max= 512, avg=156.80, stdev=176.09, samples=20
00:31:50.435 lat (msec) : 20=1.01%, 50=69.70%, 250=2.90%, 500=26.39%
00:31:50.435 cpu : usr=98.06%, sys=1.34%, ctx=49, majf=0, minf=9
00:31:50.435 IO depths : 1=4.5%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0%
00:31:50.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.435 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.435 filename2: (groupid=0, jobs=1): err= 0: pid=2798923: Tue Nov 19 11:33:44 2024
00:31:50.435 read: IOPS=155, BW=623KiB/s (638kB/s)(6272KiB/10072msec)
00:31:50.435 slat (usec): min=5, max=124, avg=49.80, stdev=30.38
00:31:50.435 clat (msec): min=29, max=354, avg=101.80, stdev=106.57
00:31:50.435 lat (msec): min=29, max=354, avg=101.85, stdev=106.54
00:31:50.435 clat percentiles (msec):
00:31:50.435 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33],
00:31:50.435 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.435 | 70.00th=[ 40], 80.00th=[ 255], 90.00th=[ 275], 95.00th=[ 275],
00:31:50.435 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 355], 99.95th=[ 355],
00:31:50.435 | 99.99th=[ 355]
00:31:50.435 bw ( KiB/s): min= 144, max= 1920, per=4.22%, avg=620.80, stdev=696.55, samples=20
00:31:50.435 iops : min= 36, max= 480, avg=155.20, stdev=174.14, samples=20
00:31:50.435 lat (msec) : 50=70.41%, 250=3.06%, 500=26.53%
00:31:50.435 cpu : usr=98.35%, sys=1.11%, ctx=51, majf=0, minf=9
00:31:50.435 IO depths : 1=1.5%, 2=7.7%, 4=25.0%, 8=54.8%, 16=11.0%, 32=0.0%, >=64=0.0%
00:31:50.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.435 issued rwts: total=1568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.435 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.435 filename2: (groupid=0, jobs=1): err= 0: pid=2798924: Tue Nov 19 11:33:44 2024
00:31:50.436 read: IOPS=139, BW=560KiB/s (573kB/s)(5632KiB/10058msec)
00:31:50.436 slat (usec): min=8, max=106, avg=45.27, stdev=27.00
00:31:50.436 clat (msec): min=14, max=530, avg=113.87, stdev=150.08
00:31:50.436 lat (msec): min=14, max=530, avg=113.92, stdev=150.07
00:31:50.436 clat percentiles (msec):
00:31:50.436 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33],
00:31:50.436 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.436 | 70.00th=[ 35], 80.00th=[ 342], 90.00th=[ 384], 95.00th=[ 422],
00:31:50.436 | 99.00th=[ 443], 99.50th=[ 527], 99.90th=[ 531], 99.95th=[ 531],
00:31:50.436 | 99.99th=[ 531]
00:31:50.436 bw ( KiB/s): min= 128, max= 1920, per=3.79%, avg=556.95, stdev=718.76, samples=20
00:31:50.436 iops : min= 32, max= 480, avg=139.20, stdev=179.62, samples=20
00:31:50.436 lat (msec) : 20=0.28%, 50=75.43%, 100=1.42%, 250=0.14%, 500=22.02%
00:31:50.436 lat (msec) : 750=0.71%
00:31:50.436 cpu : usr=98.27%, sys=1.18%, ctx=25, majf=0, minf=9
00:31:50.436 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0%
00:31:50.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.436 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.436 issued rwts: total=1408,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.436 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.436 filename2: (groupid=0, jobs=1): err= 0: pid=2798925: Tue Nov 19 11:33:44 2024
00:31:50.436 read: IOPS=155, BW=623KiB/s (638kB/s)(6272KiB/10071msec)
00:31:50.436 slat (nsec): min=7990, max=76551, avg=18658.48, stdev=10653.15
00:31:50.436 clat (msec): min=27, max=325, avg=102.58, stdev=105.21
00:31:50.436 lat (msec): min=27, max=325, avg=102.60, stdev=105.21
00:31:50.436 clat percentiles (msec):
00:31:50.436 | 1.00th=[ 30], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 34],
00:31:50.436 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.436 | 70.00th=[ 131], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 275],
00:31:50.436 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 326],
00:31:50.436 | 99.99th=[ 326]
00:31:50.436 bw ( KiB/s): min= 144, max= 1920, per=4.22%, avg=620.80, stdev=691.25, samples=20
00:31:50.436 iops : min= 36, max= 480, avg=155.20, stdev=172.81, samples=20
00:31:50.436 lat (msec) : 50=68.37%, 100=1.02%, 250=3.06%, 500=27.55%
00:31:50.436 cpu : usr=98.07%, sys=1.25%, ctx=126, majf=0, minf=9
00:31:50.436 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0%
00:31:50.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.436 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.436 issued rwts: total=1568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.436 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.436 filename2: (groupid=0, jobs=1): err= 0: pid=2798926: Tue Nov 19 11:33:44 2024
00:31:50.436 read: IOPS=157, BW=629KiB/s (644kB/s)(6336KiB/10077msec)
00:31:50.436 slat (nsec): min=7944, max=68330, avg=29441.36, stdev=14705.13
00:31:50.436 clat (msec): min=13, max=381, avg=100.95, stdev=106.41
00:31:50.436 lat (msec): min=13, max=381, avg=100.98, stdev=106.40
00:31:50.436 clat percentiles (msec):
00:31:50.436 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33],
00:31:50.436 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34],
00:31:50.436 | 70.00th=[ 35], 80.00th=[ 255], 90.00th=[ 275], 95.00th=[ 275],
00:31:50.436 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 384], 99.95th=[ 384],
00:31:50.436 | 99.99th=[ 384]
00:31:50.436 bw ( KiB/s): min= 144, max= 2048, per=4.27%, avg=627.20, stdev=704.38, samples=20
00:31:50.436 iops : min= 36, max= 512, avg=156.80, stdev=176.09, samples=20
00:31:50.436 lat (msec) : 20=1.01%, 50=69.70%, 250=3.03%, 500=26.26%
00:31:50.436 cpu : usr=98.12%, sys=1.39%, ctx=23, majf=0, minf=9
00:31:50.436 IO depths : 1=4.5%, 2=10.8%, 4=25.0%, 8=51.7%, 16=8.0%, 32=0.0%, >=64=0.0%
00:31:50.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.436 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:50.436 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:50.436 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:50.436
00:31:50.436 Run status group 0 (all jobs):
00:31:50.436 READ: bw=14.3MiB/s (15.0MB/s), 560KiB/s-658KiB/s (573kB/s-674kB/s), io=145MiB (152MB), run=10056-10110msec
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.436 bdev_null0
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.436 [2024-11-19 11:33:44.672564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.436 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.437 bdev_null1
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:50.437 {
00:31:50.437 "params": {
00:31:50.437 "name": "Nvme$subsystem",
00:31:50.437 "trtype": "$TEST_TRANSPORT",
00:31:50.437 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:50.437 "adrfam": "ipv4",
00:31:50.437 "trsvcid": "$NVMF_PORT",
00:31:50.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:50.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:50.437 "hdgst": ${hdgst:-false},
00:31:50.437 "ddgst": ${ddgst:-false}
00:31:50.437 },
00:31:50.437 "method": "bdev_nvme_attach_controller"
00:31:50.437 }
00:31:50.437 EOF
00:31:50.437 )")
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:50.437 {
00:31:50.437 "params": {
00:31:50.437 "name": "Nvme$subsystem",
00:31:50.437 "trtype": "$TEST_TRANSPORT",
00:31:50.437 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:50.437 "adrfam": "ipv4",
00:31:50.437 "trsvcid": "$NVMF_PORT",
00:31:50.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:50.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:50.437 "hdgst": ${hdgst:-false},
00:31:50.437 "ddgst": ${ddgst:-false}
00:31:50.437 },
00:31:50.437 "method": "bdev_nvme_attach_controller"
00:31:50.437 }
00:31:50.437 EOF
00:31:50.437 )")
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:50.437 "params": {
00:31:50.437 "name": "Nvme0",
00:31:50.437 "trtype": "tcp",
00:31:50.437 "traddr": "10.0.0.2",
00:31:50.437 "adrfam": "ipv4",
00:31:50.437 "trsvcid": "4420",
00:31:50.437 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:50.437 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:50.437 "hdgst": false,
00:31:50.437 "ddgst": false
00:31:50.437 },
00:31:50.437 "method": "bdev_nvme_attach_controller"
00:31:50.437 },{
00:31:50.437 "params": {
00:31:50.437 "name": "Nvme1",
00:31:50.437 "trtype": "tcp",
00:31:50.437 "traddr": "10.0.0.2",
00:31:50.437 "adrfam": "ipv4",
00:31:50.437 "trsvcid": "4420",
00:31:50.437 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:50.437 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:50.437 "hdgst": false,
00:31:50.437 "ddgst": false
00:31:50.437 },
00:31:50.437 "method": "bdev_nvme_attach_controller"
00:31:50.437 }'
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:31:50.437 11:33:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:50.437 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:31:50.437 ...
00:31:50.437 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:31:50.437 ...
00:31:50.437 fio-3.35
00:31:50.437 Starting 4 threads
00:31:55.703
00:31:55.703 filename0: (groupid=0, jobs=1): err= 0: pid=2800191: Tue Nov 19 11:33:50 2024
00:31:55.703 read: IOPS=1995, BW=15.6MiB/s (16.3MB/s)(78.0MiB/5002msec)
00:31:55.703 slat (usec): min=4, max=100, avg=16.07, stdev= 8.20
00:31:55.703 clat (usec): min=937, max=7446, avg=3950.29, stdev=552.64
00:31:55.703 lat (usec): min=952, max=7455, avg=3966.37, stdev=553.36
00:31:55.703 clat percentiles (usec):
00:31:55.703 | 1.00th=[ 2311], 5.00th=[ 3163], 10.00th=[ 3359], 20.00th=[ 3654],
00:31:55.703 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 4015],
00:31:55.703 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4686],
00:31:55.703 | 99.00th=[ 5997], 99.50th=[ 6521], 99.90th=[ 6980], 99.95th=[ 7242],
00:31:55.703 | 99.99th=[ 7439]
00:31:55.703 bw ( KiB/s): min=15088, max=16672, per=25.52%, avg=16053.33, stdev=481.53, samples=9
00:31:55.703 iops : min= 1886, max= 2084, avg=2006.67, stdev=60.19, samples=9
00:31:55.703 lat (usec) : 1000=0.03%
00:31:55.703 lat (msec) : 2=0.58%, 4=55.50%, 10=43.89%
00:31:55.703 cpu : usr=91.16%, sys=6.24%, ctx=151, majf=0, minf=9
00:31:55.703 IO depths : 1=0.6%, 2=17.3%, 4=55.7%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:55.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:55.703 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:55.703 issued rwts: total=9981,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:55.703 latency : target=0, window=0, percentile=100.00%, depth=8
00:31:55.703 filename0: (groupid=0, jobs=1): err= 0: pid=2800192: Tue Nov 19 11:33:50 2024
00:31:55.703 read: IOPS=1904, BW=14.9MiB/s (15.6MB/s)(74.4MiB/5002msec)
00:31:55.703 slat (usec): min=4, max=219, avg=15.93, stdev= 7.82
00:31:55.703 clat (usec): min=804, max=7842, avg=4144.91, stdev=703.17
00:31:55.703 lat (usec): min=818, max=7856, avg=4160.85, stdev=703.19
00:31:55.703 clat percentiles (usec):
00:31:55.703 | 1.00th=[ 1942], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3818],
00:31:55.703 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4146],
00:31:55.703 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 5407],
00:31:55.703 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 7635], 99.95th=[ 7767],
00:31:55.703 | 99.99th=[ 7832]
00:31:55.703 bw ( KiB/s): min=14560, max=15888, per=24.22%, avg=15236.30, stdev=524.37, samples=10
00:31:55.703 iops : min= 1820, max= 1986, avg=1904.40, stdev=65.64, samples=10
00:31:55.703 lat (usec) : 1000=0.15%
00:31:55.703 lat (msec) : 2=0.96%, 4=42.72%, 10=56.18%
00:31:55.703 cpu : usr=94.84%, sys=4.62%, ctx=13, majf=0, minf=0
00:31:55.703 IO depths : 1=0.1%, 2=11.7%, 4=60.6%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:55.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:55.703 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:55.703 issued rwts: total=9527,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:55.703 latency : target=0, window=0, percentile=100.00%, depth=8
00:31:55.703 filename1: (groupid=0, jobs=1): err= 0: pid=2800193: Tue Nov 19 11:33:50 2024
00:31:55.703 read: IOPS=2020, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5001msec)
00:31:55.703 slat (usec): min=4, max=111, avg=16.01, stdev= 8.01
00:31:55.703 clat (usec): min=823, max=7725, avg=3900.76, stdev=529.08
00:31:55.703 lat (usec): min=837, max=7759, avg=3916.77, stdev=529.76
00:31:55.703 clat percentiles (usec):
00:31:55.703 | 1.00th=[ 2311], 5.00th=[ 3032], 10.00th=[ 3294], 20.00th=[ 3589],
00:31:55.703 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 4015],
00:31:55.703 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4555],
00:31:55.703 | 99.00th=[ 5538], 99.50th=[ 6259], 99.90th=[ 6915], 99.95th=[ 7046],
00:31:55.703 | 99.99th=[ 7177]
00:31:55.703 bw ( KiB/s): min=15168, max=17424, per=25.82%, avg=16241.67, stdev=679.86, samples=9
00:31:55.703 iops : min= 1896, max= 2178, avg=2030.11, stdev=85.05, samples=9
00:31:55.704 lat (usec) : 1000=0.02%
00:31:55.704 lat (msec) : 2=0.58%, 4=57.38%, 10=42.01%
00:31:55.704 cpu : usr=89.26%, sys=7.04%, ctx=356, majf=0, minf=9
00:31:55.704 IO depths : 1=0.9%, 2=16.0%, 4=57.0%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:55.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:55.704 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:55.704 issued rwts: total=10106,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:55.704 latency : target=0, window=0, percentile=100.00%, depth=8
00:31:55.704 filename1: (groupid=0, jobs=1): err= 0: pid=2800194: Tue Nov 19 11:33:50 2024
00:31:55.704 read: IOPS=1942, BW=15.2MiB/s (15.9MB/s)(75.9MiB/5002msec)
00:31:55.704 slat (nsec): min=4232, max=86608, avg=17334.56, stdev=8838.79
00:31:55.704 clat (usec): min=734, max=7771, avg=4053.16, stdev=655.20
00:31:55.704 lat (usec): min=749, max=7793, avg=4070.50, stdev=655.43
00:31:55.704 clat percentiles (usec):
00:31:55.704 | 1.00th=[ 1975], 5.00th=[ 3261], 10.00th=[ 3523], 20.00th=[ 3752],
00:31:55.704 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4080],
00:31:55.704 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 5145],
00:31:55.704 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 7504], 99.95th=[ 7570],
00:31:55.704 | 99.99th=[ 7767]
00:31:55.704 bw ( KiB/s): min=15120, max=16128, per=24.81%, avg=15607.11, stdev=380.75, samples=9
00:31:55.704 iops : min= 1890, max= 2016, avg=1950.89, stdev=47.59, samples=9
00:31:55.704 lat (usec) : 750=0.01%, 1000=0.13%
00:31:55.704 lat (msec) : 2=0.90%, 4=50.51%, 10=48.45%
00:31:55.704 cpu : usr=93.78%, sys=5.18%, ctx=166, majf=0, minf=9
00:31:55.704 IO depths : 1=0.4%, 2=16.2%, 4=56.5%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:55.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:55.704 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:55.704 issued
rwts: total=9717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.704 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:55.704 00:31:55.704 Run status group 0 (all jobs): 00:31:55.704 READ: bw=61.4MiB/s (64.4MB/s), 14.9MiB/s-15.8MiB/s (15.6MB/s-16.6MB/s), io=307MiB (322MB), run=5001-5002msec 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.704 11:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:55.704 11:33:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.704 00:31:55.704 real 0m24.338s 00:31:55.704 user 4m34.202s 00:31:55.704 sys 0m6.033s 00:31:55.704 11:33:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.704 11:33:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:55.704 ************************************ 00:31:55.704 END TEST fio_dif_rand_params 00:31:55.704 ************************************ 00:31:55.704 11:33:51 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:55.704 11:33:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:55.704 11:33:51 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.704 11:33:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:55.704 ************************************ 00:31:55.704 START TEST fio_dif_digest 00:31:55.704 ************************************ 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:55.704 11:33:51 
nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:55.704 bdev_null0 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:55.704 [2024-11-19 11:33:51.089276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:55.704 { 00:31:55.704 "params": { 00:31:55.704 "name": "Nvme$subsystem", 00:31:55.704 "trtype": "$TEST_TRANSPORT", 00:31:55.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:55.704 "adrfam": "ipv4", 00:31:55.704 "trsvcid": "$NVMF_PORT", 00:31:55.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:55.704 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:31:55.704 "hdgst": ${hdgst:-false}, 00:31:55.704 "ddgst": ${ddgst:-false} 00:31:55.704 }, 00:31:55.704 "method": "bdev_nvme_attach_controller" 00:31:55.704 } 00:31:55.704 EOF 00:31:55.704 )") 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:55.704 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 
00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:55.705 "params": { 00:31:55.705 "name": "Nvme0", 00:31:55.705 "trtype": "tcp", 00:31:55.705 "traddr": "10.0.0.2", 00:31:55.705 "adrfam": "ipv4", 00:31:55.705 "trsvcid": "4420", 00:31:55.705 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:55.705 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:55.705 "hdgst": true, 00:31:55.705 "ddgst": true 00:31:55.705 }, 00:31:55.705 "method": "bdev_nvme_attach_controller" 00:31:55.705 }' 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 
00:31:55.705 11:33:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:55.963 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:55.963 ... 00:31:55.963 fio-3.35 00:31:55.963 Starting 3 threads 00:32:08.222 00:32:08.222 filename0: (groupid=0, jobs=1): err= 0: pid=2801061: Tue Nov 19 11:34:01 2024 00:32:08.222 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(261MiB/10048msec) 00:32:08.222 slat (usec): min=4, max=100, avg=19.43, stdev= 5.56 00:32:08.222 clat (usec): min=9438, max=52975, avg=14371.71, stdev=1601.61 00:32:08.222 lat (usec): min=9457, max=52995, avg=14391.15, stdev=1601.27 00:32:08.222 clat percentiles (usec): 00:32:08.222 | 1.00th=[11731], 5.00th=[12518], 10.00th=[13042], 20.00th=[13435], 00:32:08.222 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:32:08.222 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16319], 00:32:08.222 | 99.00th=[17171], 99.50th=[17433], 99.90th=[19006], 99.95th=[49021], 00:32:08.222 | 99.99th=[53216] 00:32:08.222 bw ( KiB/s): min=25088, max=28416, per=33.42%, avg=26726.40, stdev=977.12, samples=20 00:32:08.222 iops : min= 196, max= 222, avg=208.80, stdev= 7.63, samples=20 00:32:08.222 lat (msec) : 10=0.24%, 20=99.67%, 50=0.05%, 100=0.05% 00:32:08.222 cpu : usr=90.61%, sys=6.21%, ctx=315, majf=0, minf=142 00:32:08.222 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.222 issued rwts: total=2091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.222 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:08.222 filename0: (groupid=0, jobs=1): err= 0: pid=2801062: Tue Nov 19 11:34:01 2024 00:32:08.222 read: IOPS=200, 
BW=25.0MiB/s (26.2MB/s)(252MiB/10047msec) 00:32:08.222 slat (nsec): min=4297, max=46416, avg=17702.47, stdev=5060.60 00:32:08.222 clat (usec): min=11610, max=54928, avg=14938.06, stdev=2137.56 00:32:08.222 lat (usec): min=11624, max=54945, avg=14955.76, stdev=2137.51 00:32:08.222 clat percentiles (usec): 00:32:08.222 | 1.00th=[12518], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:32:08.222 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14746], 60.00th=[15008], 00:32:08.222 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16057], 95.00th=[16581], 00:32:08.222 | 99.00th=[17433], 99.50th=[17695], 99.90th=[54264], 99.95th=[54789], 00:32:08.222 | 99.99th=[54789] 00:32:08.222 bw ( KiB/s): min=23808, max=27648, per=32.15%, avg=25715.20, stdev=785.65, samples=20 00:32:08.222 iops : min= 186, max= 216, avg=200.90, stdev= 6.14, samples=20 00:32:08.222 lat (msec) : 20=99.75%, 50=0.05%, 100=0.20% 00:32:08.222 cpu : usr=94.07%, sys=5.41%, ctx=25, majf=0, minf=97 00:32:08.222 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.222 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.222 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:08.222 filename0: (groupid=0, jobs=1): err= 0: pid=2801063: Tue Nov 19 11:34:01 2024 00:32:08.222 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(272MiB/10047msec) 00:32:08.222 slat (nsec): min=5920, max=53305, avg=19453.87, stdev=6044.01 00:32:08.222 clat (usec): min=8760, max=54293, avg=13814.70, stdev=1632.53 00:32:08.222 lat (usec): min=8779, max=54309, avg=13834.16, stdev=1632.21 00:32:08.222 clat percentiles (usec): 00:32:08.222 | 1.00th=[10814], 5.00th=[11994], 10.00th=[12387], 20.00th=[12911], 00:32:08.222 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13829], 60.00th=[14091], 00:32:08.222 | 70.00th=[14353], 
80.00th=[14746], 90.00th=[15139], 95.00th=[15664], 00:32:08.222 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17171], 99.95th=[50594], 00:32:08.222 | 99.99th=[54264] 00:32:08.222 bw ( KiB/s): min=26112, max=30464, per=34.76%, avg=27801.60, stdev=1061.71, samples=20 00:32:08.222 iops : min= 204, max= 238, avg=217.20, stdev= 8.29, samples=20 00:32:08.222 lat (msec) : 10=0.46%, 20=99.45%, 100=0.09% 00:32:08.222 cpu : usr=94.75%, sys=4.70%, ctx=25, majf=0, minf=64 00:32:08.222 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.222 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.222 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:08.222 00:32:08.222 Run status group 0 (all jobs): 00:32:08.222 READ: bw=78.1MiB/s (81.9MB/s), 25.0MiB/s-27.1MiB/s (26.2MB/s-28.4MB/s), io=785MiB (823MB), run=10047-10048msec 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null0 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.222 00:32:08.222 real 0m11.077s 00:32:08.222 user 0m29.051s 00:32:08.222 sys 0m1.919s 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:08.222 11:34:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:08.222 ************************************ 00:32:08.222 END TEST fio_dif_digest 00:32:08.222 ************************************ 00:32:08.222 11:34:02 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:08.222 11:34:02 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:08.222 rmmod nvme_tcp 00:32:08.222 rmmod nvme_fabrics 00:32:08.222 rmmod nvme_keyring 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2794501 ']' 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2794501 00:32:08.222 11:34:02 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2794501 ']' 00:32:08.222 11:34:02 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2794501 00:32:08.222 11:34:02 nvmf_dif -- 
common/autotest_common.sh@959 -- # uname 00:32:08.222 11:34:02 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:08.222 11:34:02 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2794501 00:32:08.222 11:34:02 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:08.222 11:34:02 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:08.222 11:34:02 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2794501' 00:32:08.222 killing process with pid 2794501 00:32:08.222 11:34:02 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2794501 00:32:08.222 11:34:02 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2794501 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:08.222 11:34:02 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:08.222 Waiting for block devices as requested 00:32:08.481 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:32:08.481 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:08.481 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:08.740 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:08.740 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:08.740 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:08.999 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:08.999 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:08.999 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:08.999 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:09.258 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:09.258 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:09.258 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:09.516 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:09.516 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:09.516 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:09.516 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 
00:32:09.774 11:34:05 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:09.774 11:34:05 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:09.774 11:34:05 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:32:09.774 11:34:05 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:32:09.774 11:34:05 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:32:09.774 11:34:05 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:09.774 11:34:05 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:09.774 11:34:05 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:09.774 11:34:05 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.774 11:34:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:09.774 11:34:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.677 11:34:07 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:11.677 00:32:11.677 real 1m8.302s 00:32:11.677 user 6m31.657s 00:32:11.677 sys 0m18.254s 00:32:11.677 11:34:07 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.677 11:34:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:11.677 ************************************ 00:32:11.677 END TEST nvmf_dif 00:32:11.677 ************************************ 00:32:11.677 11:34:07 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:11.677 11:34:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:11.677 11:34:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.677 11:34:07 -- common/autotest_common.sh@10 -- # set +x 00:32:11.935 ************************************ 00:32:11.935 START TEST nvmf_abort_qd_sizes 00:32:11.935 ************************************ 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:11.935 * Looking for test storage... 00:32:11.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:11.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.935 --rc genhtml_branch_coverage=1 00:32:11.935 --rc genhtml_function_coverage=1 00:32:11.935 --rc genhtml_legend=1 00:32:11.935 --rc geninfo_all_blocks=1 00:32:11.935 --rc geninfo_unexecuted_blocks=1 00:32:11.935 00:32:11.935 ' 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:11.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.935 --rc genhtml_branch_coverage=1 00:32:11.935 --rc genhtml_function_coverage=1 00:32:11.935 --rc genhtml_legend=1 00:32:11.935 --rc 
geninfo_all_blocks=1 00:32:11.935 --rc geninfo_unexecuted_blocks=1 00:32:11.935 00:32:11.935 ' 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:11.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.935 --rc genhtml_branch_coverage=1 00:32:11.935 --rc genhtml_function_coverage=1 00:32:11.935 --rc genhtml_legend=1 00:32:11.935 --rc geninfo_all_blocks=1 00:32:11.935 --rc geninfo_unexecuted_blocks=1 00:32:11.935 00:32:11.935 ' 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:11.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.935 --rc genhtml_branch_coverage=1 00:32:11.935 --rc genhtml_function_coverage=1 00:32:11.935 --rc genhtml_legend=1 00:32:11.935 --rc geninfo_all_blocks=1 00:32:11.935 --rc geninfo_unexecuted_blocks=1 00:32:11.935 00:32:11.935 ' 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.935 11:34:07 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:32:11.935 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.936 11:34:07 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:11.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:32:11.936 11:34:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:14.463 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:14.463 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:32:14.463 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:14.463 11:34:09 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:32:14.463 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:14.463 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:14.463 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:14.463 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:32:14.463 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:14.463 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:32:14.463 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:14.464 11:34:09 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:32:14.464 Found 0000:82:00.0 (0x8086 - 0x159b) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:32:14.464 Found 0000:82:00.1 (0x8086 - 0x159b) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:32:14.464 Found net devices under 0000:82:00.0: cvl_0_0 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:32:14.464 Found net devices under 0000:82:00.1: cvl_0_1 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:14.464 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:14.722 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:14.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:14.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:32:14.722 00:32:14.722 --- 10.0.0.2 ping statistics --- 00:32:14.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.722 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:32:14.722 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:14.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:14.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:32:14.722 00:32:14.722 --- 10.0.0.1 ping statistics --- 00:32:14.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.722 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:32:14.722 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:14.722 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:32:14.722 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:14.722 11:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:16.110 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:16.110 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:16.110 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:16.110 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:16.110 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:16.110 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:16.110 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:16.110 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:16.110 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:16.110 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:16.110 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:16.110 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:16.110 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:16.110 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:16.110 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:16.110 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:18.012 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:18.012 11:34:13 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2806485 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2806485 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2806485 ']' 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:18.012 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:18.012 [2024-11-19 11:34:13.479695] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:32:18.012 [2024-11-19 11:34:13.479765] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.270 [2024-11-19 11:34:13.558195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:18.270 [2024-11-19 11:34:13.614016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:18.270 [2024-11-19 11:34:13.614071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.270 [2024-11-19 11:34:13.614095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.270 [2024-11-19 11:34:13.614106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.270 [2024-11-19 11:34:13.614116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:18.270 [2024-11-19 11:34:13.615554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.270 [2024-11-19 11:34:13.615611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.270 [2024-11-19 11:34:13.615719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:18.270 [2024-11-19 11:34:13.615728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:81:00.0 ]] 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:81:00.0 ]] 
00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:81:00.0 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:81:00.0 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:18.270 11:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:18.528 ************************************ 00:32:18.528 START TEST spdk_target_abort 00:32:18.528 ************************************ 00:32:18.528 11:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:32:18.528 11:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:18.528 11:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:81:00.0 -b spdk_target 00:32:18.528 11:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.528 11:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.805 spdk_targetn1 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.805 [2024-11-19 11:34:16.645007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:21.805 [2024-11-19 11:34:16.693340] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:21.805 11:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:25.080 Initializing NVMe Controllers 00:32:25.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:25.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:25.080 Initialization complete. Launching workers. 
00:32:25.080 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12126, failed: 0 00:32:25.080 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1248, failed to submit 10878 00:32:25.080 success 713, unsuccessful 535, failed 0 00:32:25.080 11:34:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:25.080 11:34:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:28.357 Initializing NVMe Controllers 00:32:28.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:28.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:28.357 Initialization complete. Launching workers. 00:32:28.357 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8638, failed: 0 00:32:28.357 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1229, failed to submit 7409 00:32:28.357 success 328, unsuccessful 901, failed 0 00:32:28.357 11:34:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:28.357 11:34:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:31.636 Initializing NVMe Controllers 00:32:31.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:31.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:31.636 Initialization complete. Launching workers. 
00:32:31.636 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31364, failed: 0 00:32:31.636 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2737, failed to submit 28627 00:32:31.636 success 519, unsuccessful 2218, failed 0 00:32:31.636 11:34:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:31.636 11:34:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.636 11:34:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.636 11:34:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.636 11:34:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:31.636 11:34:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.636 11:34:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:33.534 11:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.534 11:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2806485 00:32:33.534 11:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2806485 ']' 00:32:33.534 11:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2806485 00:32:33.534 11:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:32:33.534 11:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:33.534 11:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2806485 00:32:33.534 11:34:28 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:33.534 11:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:33.534 11:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2806485' 00:32:33.534 killing process with pid 2806485 00:32:33.534 11:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2806485 00:32:33.534 11:34:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2806485 00:32:33.534 00:32:33.534 real 0m15.227s 00:32:33.534 user 0m57.386s 00:32:33.534 sys 0m3.236s 00:32:33.534 11:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.534 11:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:33.534 ************************************ 00:32:33.534 END TEST spdk_target_abort 00:32:33.534 ************************************ 00:32:33.798 11:34:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:33.798 11:34:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:33.798 11:34:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.798 11:34:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:33.798 ************************************ 00:32:33.798 START TEST kernel_target_abort 00:32:33.798 ************************************ 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:32:33.798 11:34:29 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:33.798 11:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:35.177 Waiting for block devices as requested 00:32:35.177 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:32:35.177 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:35.466 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:35.466 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:35.466 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:35.466 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:35.724 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:35.724 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:35.724 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:35.724 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:35.982 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:35.982 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:35.982 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:35.982 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:36.239 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:36.239 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:36.239 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:36.497 11:34:31 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:36.497 No valid GPT data, bailing 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1
00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420
00:32:36.497
00:32:36.497 Discovery Log Number of Records 2, Generation counter 2
00:32:36.497 =====Discovery Log Entry 0======
00:32:36.497 trtype: tcp
00:32:36.497 adrfam: ipv4
00:32:36.497 subtype: current discovery subsystem
00:32:36.497 treq: not specified, sq flow control disable supported
00:32:36.497 portid: 1
00:32:36.497 trsvcid: 4420
00:32:36.497 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:32:36.497 traddr: 10.0.0.1
00:32:36.497 eflags: none
00:32:36.497 sectype: none
00:32:36.497 =====Discovery Log Entry 1======
00:32:36.497 trtype: tcp
00:32:36.497 adrfam: ipv4
00:32:36.497 subtype: nvme subsystem
00:32:36.497 treq: not specified, sq flow control disable supported
00:32:36.497 portid: 1
00:32:36.497 trsvcid: 4420
00:32:36.497 subnqn: nqn.2016-06.io.spdk:testnqn
00:32:36.497 traddr: 10.0.0.1
00:32:36.497 eflags: none
00:32:36.497 sectype: none
00:32:36.497 11:34:31
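The trace above drives the in-kernel nvmet target entirely through configfs: mkdir the subsystem, namespace, and port directories, echo attribute values into them, then ln -s the subsystem under the port. The sketch below condenses that sequence. It is not the autotest script itself: it substitutes a scratch directory for /sys/kernel/config/nvmet so it can run unprivileged, and the attribute file names (attr_serial, attr_allow_any_host, device_path, enable, addr_*) are taken from the nvmet configfs ABI rather than from this log (xtrace hides the redirect targets), so treat them as assumptions.

```shell
# Condensed sketch of the configfs sequence traced in the log.
# Assumption: a scratch dir stands in for /sys/kernel/config/nvmet so this
# runs without root; attribute file names follow the nvmet configfs ABI.
nvmet=$(mktemp -d)
subsys="$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn"
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1/subsystems"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # serial number
echo 1 > "$subsys/attr_allow_any_host"                          # no host allow-list
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"          # backing block device
echo 1 > "$subsys/namespaces/1/enable"                          # bring namespace online
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                    # expose subsystem on port
```

On a real system the `ln -s` step is what makes the subsystem discoverable, which is why the very next command in the log (`nvme discover -a 10.0.0.1 -t tcp -s 4420`) returns two records.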
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:36.497 11:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:39.777 Initializing NVMe Controllers 00:32:39.777 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:39.777 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:39.777 Initialization complete. Launching workers. 
00:32:39.777 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 47964, failed: 0 00:32:39.777 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 47964, failed to submit 0 00:32:39.777 success 0, unsuccessful 47964, failed 0 00:32:39.777 11:34:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:39.777 11:34:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:43.060 Initializing NVMe Controllers 00:32:43.060 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:43.060 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:43.060 Initialization complete. Launching workers. 00:32:43.060 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92991, failed: 0 00:32:43.060 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21094, failed to submit 71897 00:32:43.060 success 0, unsuccessful 21094, failed 0 00:32:43.060 11:34:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:43.060 11:34:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:46.339 Initializing NVMe Controllers 00:32:46.339 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:46.339 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:46.339 Initialization complete. Launching workers. 
00:32:46.339 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87281, failed: 0 00:32:46.339 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21794, failed to submit 65487 00:32:46.339 success 0, unsuccessful 21794, failed 0 00:32:46.339 11:34:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:46.339 11:34:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:46.339 11:34:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:32:46.339 11:34:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:46.339 11:34:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:46.339 11:34:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:46.339 11:34:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:46.339 11:34:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:46.339 11:34:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:46.339 11:34:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:47.275 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:47.275 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:47.275 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:47.275 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:47.275 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:47.275 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:47.275 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:47.534 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:47.534 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:47.534 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:47.534 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:47.534 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:47.534 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:47.534 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:47.534 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:47.534 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:49.444 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:32:49.444 00:32:49.444 real 0m15.718s 00:32:49.444 user 0m6.249s 00:32:49.444 sys 0m3.805s 00:32:49.444 11:34:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.444 11:34:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.444 ************************************ 00:32:49.444 END TEST kernel_target_abort 00:32:49.444 ************************************ 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:49.444 rmmod nvme_tcp 00:32:49.444 rmmod nvme_fabrics 00:32:49.444 rmmod nvme_keyring 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2806485 ']' 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2806485 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2806485 ']' 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2806485 00:32:49.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2806485) - No such process 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2806485 is not found' 00:32:49.444 Process with pid 2806485 is not found 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:49.444 11:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:50.823 Waiting for block devices as requested 00:32:50.823 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:32:51.082 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:51.082 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:51.082 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:51.341 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:51.341 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:51.341 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:51.341 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:51.600 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:51.600 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:51.600 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:51.600 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:51.860 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:51.860 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:51.860 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:51.860 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:52.119 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:52.119 11:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:52.119 11:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:52.119 11:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:32:52.119 11:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:32:52.119 11:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:52.119 11:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:32:52.119 11:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:52.119 11:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:52.119 11:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.119 11:34:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:52.119 11:34:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.654 11:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:54.654 00:32:54.654 real 0m42.388s 00:32:54.654 user 1m6.278s 00:32:54.654 sys 0m11.234s 00:32:54.654 11:34:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:54.654 11:34:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:54.654 ************************************ 00:32:54.654 END TEST nvmf_abort_qd_sizes 00:32:54.654 ************************************ 00:32:54.654 11:34:49 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:54.654 11:34:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:54.654 11:34:49 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:32:54.654 11:34:49 -- common/autotest_common.sh@10 -- # set +x 00:32:54.654 ************************************ 00:32:54.654 START TEST keyring_file 00:32:54.654 ************************************ 00:32:54.654 11:34:49 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:54.654 * Looking for test storage... 00:32:54.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:54.654 11:34:49 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:54.654 11:34:49 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:32:54.654 11:34:49 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:54.654 11:34:49 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:54.654 11:34:49 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:54.654 11:34:49 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:54.654 11:34:49 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:54.654 11:34:49 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:32:54.654 11:34:49 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:32:54.654 11:34:49 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:32:54.654 11:34:49 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:32:54.654 11:34:49 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@345 -- # : 1 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:54.655 11:34:49 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@353 -- # local d=1 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@355 -- # echo 1 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@353 -- # local d=2 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@355 -- # echo 2 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@368 -- # return 0 00:32:54.655 11:34:49 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:54.655 11:34:49 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:54.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.655 --rc genhtml_branch_coverage=1 00:32:54.655 --rc genhtml_function_coverage=1 00:32:54.655 --rc genhtml_legend=1 00:32:54.655 --rc geninfo_all_blocks=1 00:32:54.655 --rc geninfo_unexecuted_blocks=1 00:32:54.655 00:32:54.655 ' 00:32:54.655 11:34:49 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:54.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.655 --rc genhtml_branch_coverage=1 00:32:54.655 --rc genhtml_function_coverage=1 00:32:54.655 --rc genhtml_legend=1 00:32:54.655 --rc geninfo_all_blocks=1 00:32:54.655 --rc 
geninfo_unexecuted_blocks=1 00:32:54.655 00:32:54.655 ' 00:32:54.655 11:34:49 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:54.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.655 --rc genhtml_branch_coverage=1 00:32:54.655 --rc genhtml_function_coverage=1 00:32:54.655 --rc genhtml_legend=1 00:32:54.655 --rc geninfo_all_blocks=1 00:32:54.655 --rc geninfo_unexecuted_blocks=1 00:32:54.655 00:32:54.655 ' 00:32:54.655 11:34:49 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:54.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.655 --rc genhtml_branch_coverage=1 00:32:54.655 --rc genhtml_function_coverage=1 00:32:54.655 --rc genhtml_legend=1 00:32:54.655 --rc geninfo_all_blocks=1 00:32:54.655 --rc geninfo_unexecuted_blocks=1 00:32:54.655 00:32:54.655 ' 00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.655 11:34:49 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.655 11:34:49 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.655 11:34:49 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.655 11:34:49 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.655 11:34:49 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.655 11:34:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:54.655 11:34:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@51 -- # : 0 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:32:54.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tpzEp8skJP 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tpzEp8skJP 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tpzEp8skJP 00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.tpzEp8skJP 00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZAC8QcgJz6 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:54.655 11:34:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZAC8QcgJz6 00:32:54.655 11:34:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZAC8QcgJz6 00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ZAC8QcgJz6 
00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=2812702 00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:54.655 11:34:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2812702 00:32:54.655 11:34:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2812702 ']' 00:32:54.655 11:34:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.655 11:34:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:54.655 11:34:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.655 11:34:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:54.655 11:34:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:54.655 [2024-11-19 11:34:49.949850] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:32:54.655 [2024-11-19 11:34:49.949956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2812702 ] 00:32:54.655 [2024-11-19 11:34:50.025782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.655 [2024-11-19 11:34:50.092009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.913 11:34:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:54.913 11:34:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:54.913 11:34:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:54.913 11:34:50 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.913 11:34:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:54.913 [2024-11-19 11:34:50.372026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.913 null0 00:32:54.913 [2024-11-19 11:34:50.404088] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:54.913 [2024-11-19 11:34:50.404569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.171 11:34:50 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:55.171 [2024-11-19 11:34:50.428142] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:55.171 request: 00:32:55.171 { 00:32:55.171 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:55.171 "secure_channel": false, 00:32:55.171 "listen_address": { 00:32:55.171 "trtype": "tcp", 00:32:55.171 "traddr": "127.0.0.1", 00:32:55.171 "trsvcid": "4420" 00:32:55.171 }, 00:32:55.171 "method": "nvmf_subsystem_add_listener", 00:32:55.171 "req_id": 1 00:32:55.171 } 00:32:55.171 Got JSON-RPC error response 00:32:55.171 response: 00:32:55.171 { 00:32:55.171 "code": -32602, 00:32:55.171 "message": "Invalid parameters" 00:32:55.171 } 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:55.171 11:34:50 keyring_file -- keyring/file.sh@47 -- # bperfpid=2812749 00:32:55.171 11:34:50 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:55.171 11:34:50 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2812749 /var/tmp/bperf.sock 00:32:55.171 11:34:50 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2812749 ']' 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:55.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.171 11:34:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:55.171 [2024-11-19 11:34:50.479891] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 00:32:55.171 [2024-11-19 11:34:50.479971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2812749 ] 00:32:55.171 [2024-11-19 11:34:50.558120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.171 [2024-11-19 11:34:50.615560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.429 11:34:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:55.429 11:34:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:55.429 11:34:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tpzEp8skJP 00:32:55.429 11:34:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tpzEp8skJP 00:32:55.687 11:34:50 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZAC8QcgJz6 00:32:55.687 11:34:50 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZAC8QcgJz6 00:32:55.945 11:34:51 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:32:55.945 11:34:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:55.945 11:34:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.945 11:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.945 11:34:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:56.203 11:34:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.tpzEp8skJP == \/\t\m\p\/\t\m\p\.\t\p\z\E\p\8\s\k\J\P ]] 00:32:56.203 11:34:51 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:32:56.203 11:34:51 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:32:56.203 11:34:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:56.203 11:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:56.203 11:34:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:56.461 11:34:51 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ZAC8QcgJz6 == \/\t\m\p\/\t\m\p\.\Z\A\C\8\Q\c\g\J\z\6 ]] 00:32:56.461 11:34:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:32:56.461 11:34:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:56.461 11:34:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:56.461 11:34:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:56.461 11:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:56.461 11:34:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:32:56.719 11:34:52 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:56.719 11:34:52 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:32:56.719 11:34:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:56.719 11:34:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:56.719 11:34:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:56.719 11:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:56.719 11:34:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:56.977 11:34:52 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:32:56.977 11:34:52 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:56.977 11:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:57.235 [2024-11-19 11:34:52.588696] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:57.235 nvme0n1 00:32:57.235 11:34:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:32:57.235 11:34:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:57.235 11:34:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:57.235 11:34:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.235 11:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.235 11:34:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:32:57.493 11:34:52 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:32:57.493 11:34:52 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:32:57.493 11:34:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:57.493 11:34:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:57.493 11:34:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.494 11:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.494 11:34:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:57.752 11:34:53 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:32:57.752 11:34:53 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:58.010 Running I/O for 1 seconds... 00:32:58.943 10423.00 IOPS, 40.71 MiB/s 00:32:58.943 Latency(us) 00:32:58.943 [2024-11-19T10:34:54.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.943 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:58.943 nvme0n1 : 1.05 10062.49 39.31 0.00 0.00 12394.37 4174.89 51263.72 00:32:58.943 [2024-11-19T10:34:54.440Z] =================================================================================================================== 00:32:58.943 [2024-11-19T10:34:54.440Z] Total : 10062.49 39.31 0.00 0.00 12394.37 4174.89 51263.72 00:32:58.943 { 00:32:58.943 "results": [ 00:32:58.943 { 00:32:58.943 "job": "nvme0n1", 00:32:58.943 "core_mask": "0x2", 00:32:58.943 "workload": "randrw", 00:32:58.943 "percentage": 50, 00:32:58.943 "status": "finished", 00:32:58.943 "queue_depth": 128, 00:32:58.943 "io_size": 4096, 00:32:58.943 "runtime": 1.048746, 00:32:58.943 "iops": 10062.493682931807, 00:32:58.943 "mibps": 39.30661594895237, 
00:32:58.943 "io_failed": 0, 00:32:58.943 "io_timeout": 0, 00:32:58.943 "avg_latency_us": 12394.36596003945, 00:32:58.943 "min_latency_us": 4174.885925925926, 00:32:58.943 "max_latency_us": 51263.71555555556 00:32:58.943 } 00:32:58.943 ], 00:32:58.943 "core_count": 1 00:32:58.943 } 00:32:58.943 11:34:54 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:58.943 11:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:59.201 11:34:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:32:59.201 11:34:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:59.201 11:34:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:59.201 11:34:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:59.201 11:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.201 11:34:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:59.459 11:34:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:59.459 11:34:54 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:32:59.459 11:34:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:59.459 11:34:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:59.459 11:34:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:59.459 11:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.459 11:34:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:00.025 11:34:55 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:33:00.025 11:34:55 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:00.025 11:34:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:00.025 11:34:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:00.025 11:34:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:00.025 11:34:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:00.025 11:34:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:00.025 11:34:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:00.025 11:34:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:00.025 11:34:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:00.025 [2024-11-19 11:34:55.502787] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:00.025 [2024-11-19 11:34:55.502809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef6510 (107): Transport endpoint is not connected 00:33:00.025 [2024-11-19 11:34:55.503803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef6510 (9): Bad file descriptor 00:33:00.025 [2024-11-19 11:34:55.504802] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:00.025 [2024-11-19 11:34:55.504828] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:00.025 [2024-11-19 11:34:55.504857] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:00.025 [2024-11-19 11:34:55.504871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:33:00.025 request: 00:33:00.025 { 00:33:00.025 "name": "nvme0", 00:33:00.025 "trtype": "tcp", 00:33:00.025 "traddr": "127.0.0.1", 00:33:00.025 "adrfam": "ipv4", 00:33:00.025 "trsvcid": "4420", 00:33:00.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:00.025 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:00.025 "prchk_reftag": false, 00:33:00.025 "prchk_guard": false, 00:33:00.025 "hdgst": false, 00:33:00.025 "ddgst": false, 00:33:00.025 "psk": "key1", 00:33:00.025 "allow_unrecognized_csi": false, 00:33:00.025 "method": "bdev_nvme_attach_controller", 00:33:00.025 "req_id": 1 00:33:00.025 } 00:33:00.025 Got JSON-RPC error response 00:33:00.025 response: 00:33:00.025 { 00:33:00.025 "code": -5, 00:33:00.025 "message": "Input/output error" 00:33:00.025 } 00:33:00.326 11:34:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:00.326 11:34:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:00.326 11:34:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:00.326 11:34:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:00.326 11:34:55 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:33:00.326 11:34:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:00.326 11:34:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:00.326 11:34:55 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:33:00.326 11:34:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:00.326 11:34:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.603 11:34:55 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:00.603 11:34:55 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:33:00.603 11:34:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:00.603 11:34:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:00.603 11:34:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.603 11:34:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.603 11:34:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:00.603 11:34:56 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:33:00.603 11:34:56 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:33:00.603 11:34:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:01.168 11:34:56 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:33:01.168 11:34:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:01.168 11:34:56 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:33:01.168 11:34:56 keyring_file -- keyring/file.sh@78 -- # jq length 00:33:01.168 11:34:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.426 11:34:56 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:33:01.426 11:34:56 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.tpzEp8skJP 00:33:01.426 11:34:56 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.tpzEp8skJP 00:33:01.426 11:34:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:01.426 11:34:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.tpzEp8skJP 00:33:01.426 11:34:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:01.426 11:34:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:01.426 11:34:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:01.426 11:34:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:01.426 11:34:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tpzEp8skJP 00:33:01.426 11:34:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tpzEp8skJP 00:33:01.684 [2024-11-19 11:34:57.155909] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tpzEp8skJP': 0100660 00:33:01.684 [2024-11-19 11:34:57.155942] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:01.684 request: 00:33:01.684 { 00:33:01.684 "name": "key0", 00:33:01.684 "path": "/tmp/tmp.tpzEp8skJP", 00:33:01.684 "method": "keyring_file_add_key", 00:33:01.684 "req_id": 1 00:33:01.684 } 00:33:01.684 Got JSON-RPC error response 00:33:01.684 response: 00:33:01.684 { 00:33:01.684 "code": -1, 00:33:01.684 "message": "Operation not permitted" 00:33:01.684 } 00:33:01.684 11:34:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:01.684 11:34:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:01.684 11:34:57 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:01.684 11:34:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:01.684 11:34:57 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.tpzEp8skJP 00:33:01.684 11:34:57 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tpzEp8skJP 00:33:01.684 11:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tpzEp8skJP 00:33:02.249 11:34:57 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.tpzEp8skJP 00:33:02.249 11:34:57 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:33:02.249 11:34:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:02.249 11:34:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:02.249 11:34:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:02.249 11:34:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:02.249 11:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:02.249 11:34:57 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:33:02.249 11:34:57 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:02.249 11:34:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:33:02.249 11:34:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:02.249 11:34:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:02.249 11:34:57 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.249 11:34:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:02.249 11:34:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.249 11:34:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:02.249 11:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:02.506 [2024-11-19 11:34:57.998260] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.tpzEp8skJP': No such file or directory 00:33:02.506 [2024-11-19 11:34:57.998306] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:02.506 [2024-11-19 11:34:57.998331] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:02.506 [2024-11-19 11:34:57.998344] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:33:02.506 [2024-11-19 11:34:57.998374] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:02.506 [2024-11-19 11:34:57.998389] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:02.765 request: 00:33:02.765 { 00:33:02.765 "name": "nvme0", 00:33:02.765 "trtype": "tcp", 00:33:02.765 "traddr": "127.0.0.1", 00:33:02.765 "adrfam": "ipv4", 00:33:02.765 "trsvcid": "4420", 00:33:02.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:02.765 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:33:02.765 "prchk_reftag": false, 00:33:02.765 "prchk_guard": false, 00:33:02.765 "hdgst": false, 00:33:02.765 "ddgst": false, 00:33:02.765 "psk": "key0", 00:33:02.765 "allow_unrecognized_csi": false, 00:33:02.765 "method": "bdev_nvme_attach_controller", 00:33:02.765 "req_id": 1 00:33:02.765 } 00:33:02.765 Got JSON-RPC error response 00:33:02.765 response: 00:33:02.765 { 00:33:02.765 "code": -19, 00:33:02.765 "message": "No such device" 00:33:02.765 } 00:33:02.765 11:34:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:33:02.765 11:34:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:02.765 11:34:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:02.765 11:34:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:02.765 11:34:58 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:33:02.765 11:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:03.023 11:34:58 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:03.023 11:34:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:03.023 11:34:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:03.023 11:34:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:03.023 11:34:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:03.023 11:34:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:03.023 11:34:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nEvGUUXRSP 00:33:03.023 11:34:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:03.023 11:34:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:03.023 11:34:58 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:33:03.023 11:34:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:03.023 11:34:58 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:03.023 11:34:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:03.023 11:34:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:03.023 11:34:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nEvGUUXRSP 00:33:03.023 11:34:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nEvGUUXRSP 00:33:03.023 11:34:58 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.nEvGUUXRSP 00:33:03.023 11:34:58 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nEvGUUXRSP 00:33:03.023 11:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nEvGUUXRSP 00:33:03.281 11:34:58 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:03.281 11:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:03.539 nvme0n1 00:33:03.539 11:34:58 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:33:03.539 11:34:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:03.539 11:34:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:03.539 11:34:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:03.539 11:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.539 
11:34:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:03.797 11:34:59 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:33:03.797 11:34:59 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:33:03.797 11:34:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:04.055 11:34:59 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:33:04.055 11:34:59 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:33:04.055 11:34:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:04.055 11:34:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:04.055 11:34:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:04.313 11:34:59 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:33:04.313 11:34:59 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:33:04.313 11:34:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:04.313 11:34:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:04.313 11:34:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:04.313 11:34:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:04.313 11:34:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:04.572 11:35:00 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:33:04.572 11:35:00 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:04.572 11:35:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:33:04.829 11:35:00 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:33:04.829 11:35:00 keyring_file -- keyring/file.sh@105 -- # jq length 00:33:04.829 11:35:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.395 11:35:00 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:33:05.395 11:35:00 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nEvGUUXRSP 00:33:05.395 11:35:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nEvGUUXRSP 00:33:05.395 11:35:00 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZAC8QcgJz6 00:33:05.395 11:35:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZAC8QcgJz6 00:33:05.653 11:35:01 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:05.653 11:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:06.218 nvme0n1 00:33:06.218 11:35:01 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:33:06.218 11:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:06.477 11:35:01 keyring_file -- keyring/file.sh@113 -- # config='{ 00:33:06.477 "subsystems": [ 00:33:06.477 { 00:33:06.477 "subsystem": "keyring", 00:33:06.477 
"config": [ 00:33:06.477 { 00:33:06.477 "method": "keyring_file_add_key", 00:33:06.477 "params": { 00:33:06.477 "name": "key0", 00:33:06.477 "path": "/tmp/tmp.nEvGUUXRSP" 00:33:06.477 } 00:33:06.477 }, 00:33:06.477 { 00:33:06.477 "method": "keyring_file_add_key", 00:33:06.477 "params": { 00:33:06.477 "name": "key1", 00:33:06.477 "path": "/tmp/tmp.ZAC8QcgJz6" 00:33:06.477 } 00:33:06.477 } 00:33:06.477 ] 00:33:06.477 }, 00:33:06.477 { 00:33:06.477 "subsystem": "iobuf", 00:33:06.477 "config": [ 00:33:06.477 { 00:33:06.477 "method": "iobuf_set_options", 00:33:06.477 "params": { 00:33:06.477 "small_pool_count": 8192, 00:33:06.477 "large_pool_count": 1024, 00:33:06.477 "small_bufsize": 8192, 00:33:06.477 "large_bufsize": 135168, 00:33:06.477 "enable_numa": false 00:33:06.477 } 00:33:06.477 } 00:33:06.477 ] 00:33:06.477 }, 00:33:06.477 { 00:33:06.477 "subsystem": "sock", 00:33:06.477 "config": [ 00:33:06.477 { 00:33:06.477 "method": "sock_set_default_impl", 00:33:06.477 "params": { 00:33:06.477 "impl_name": "posix" 00:33:06.477 } 00:33:06.477 }, 00:33:06.477 { 00:33:06.477 "method": "sock_impl_set_options", 00:33:06.477 "params": { 00:33:06.477 "impl_name": "ssl", 00:33:06.477 "recv_buf_size": 4096, 00:33:06.477 "send_buf_size": 4096, 00:33:06.477 "enable_recv_pipe": true, 00:33:06.477 "enable_quickack": false, 00:33:06.477 "enable_placement_id": 0, 00:33:06.477 "enable_zerocopy_send_server": true, 00:33:06.477 "enable_zerocopy_send_client": false, 00:33:06.477 "zerocopy_threshold": 0, 00:33:06.477 "tls_version": 0, 00:33:06.477 "enable_ktls": false 00:33:06.477 } 00:33:06.477 }, 00:33:06.477 { 00:33:06.477 "method": "sock_impl_set_options", 00:33:06.477 "params": { 00:33:06.477 "impl_name": "posix", 00:33:06.477 "recv_buf_size": 2097152, 00:33:06.477 "send_buf_size": 2097152, 00:33:06.477 "enable_recv_pipe": true, 00:33:06.477 "enable_quickack": false, 00:33:06.477 "enable_placement_id": 0, 00:33:06.477 "enable_zerocopy_send_server": true, 00:33:06.477 
"enable_zerocopy_send_client": false, 00:33:06.477 "zerocopy_threshold": 0, 00:33:06.477 "tls_version": 0, 00:33:06.477 "enable_ktls": false 00:33:06.477 } 00:33:06.477 } 00:33:06.477 ] 00:33:06.477 }, 00:33:06.477 { 00:33:06.477 "subsystem": "vmd", 00:33:06.477 "config": [] 00:33:06.477 }, 00:33:06.477 { 00:33:06.477 "subsystem": "accel", 00:33:06.477 "config": [ 00:33:06.477 { 00:33:06.477 "method": "accel_set_options", 00:33:06.477 "params": { 00:33:06.477 "small_cache_size": 128, 00:33:06.477 "large_cache_size": 16, 00:33:06.477 "task_count": 2048, 00:33:06.477 "sequence_count": 2048, 00:33:06.477 "buf_count": 2048 00:33:06.477 } 00:33:06.477 } 00:33:06.477 ] 00:33:06.477 }, 00:33:06.477 { 00:33:06.477 "subsystem": "bdev", 00:33:06.477 "config": [ 00:33:06.477 { 00:33:06.477 "method": "bdev_set_options", 00:33:06.477 "params": { 00:33:06.477 "bdev_io_pool_size": 65535, 00:33:06.477 "bdev_io_cache_size": 256, 00:33:06.477 "bdev_auto_examine": true, 00:33:06.477 "iobuf_small_cache_size": 128, 00:33:06.477 "iobuf_large_cache_size": 16 00:33:06.477 } 00:33:06.477 }, 00:33:06.477 { 00:33:06.477 "method": "bdev_raid_set_options", 00:33:06.477 "params": { 00:33:06.477 "process_window_size_kb": 1024, 00:33:06.477 "process_max_bandwidth_mb_sec": 0 00:33:06.477 } 00:33:06.477 }, 00:33:06.477 { 00:33:06.477 "method": "bdev_iscsi_set_options", 00:33:06.477 "params": { 00:33:06.477 "timeout_sec": 30 00:33:06.477 } 00:33:06.477 }, 00:33:06.477 { 00:33:06.477 "method": "bdev_nvme_set_options", 00:33:06.477 "params": { 00:33:06.477 "action_on_timeout": "none", 00:33:06.477 "timeout_us": 0, 00:33:06.477 "timeout_admin_us": 0, 00:33:06.477 "keep_alive_timeout_ms": 10000, 00:33:06.477 "arbitration_burst": 0, 00:33:06.477 "low_priority_weight": 0, 00:33:06.477 "medium_priority_weight": 0, 00:33:06.477 "high_priority_weight": 0, 00:33:06.477 "nvme_adminq_poll_period_us": 10000, 00:33:06.477 "nvme_ioq_poll_period_us": 0, 00:33:06.477 "io_queue_requests": 512, 00:33:06.477 
"delay_cmd_submit": true, 00:33:06.477 "transport_retry_count": 4, 00:33:06.477 "bdev_retry_count": 3, 00:33:06.477 "transport_ack_timeout": 0, 00:33:06.477 "ctrlr_loss_timeout_sec": 0, 00:33:06.477 "reconnect_delay_sec": 0, 00:33:06.477 "fast_io_fail_timeout_sec": 0, 00:33:06.477 "disable_auto_failback": false, 00:33:06.477 "generate_uuids": false, 00:33:06.477 "transport_tos": 0, 00:33:06.477 "nvme_error_stat": false, 00:33:06.477 "rdma_srq_size": 0, 00:33:06.477 "io_path_stat": false, 00:33:06.477 "allow_accel_sequence": false, 00:33:06.477 "rdma_max_cq_size": 0, 00:33:06.477 "rdma_cm_event_timeout_ms": 0, 00:33:06.477 "dhchap_digests": [ 00:33:06.477 "sha256", 00:33:06.477 "sha384", 00:33:06.477 "sha512" 00:33:06.477 ], 00:33:06.477 "dhchap_dhgroups": [ 00:33:06.477 "null", 00:33:06.477 "ffdhe2048", 00:33:06.477 "ffdhe3072", 00:33:06.477 "ffdhe4096", 00:33:06.477 "ffdhe6144", 00:33:06.477 "ffdhe8192" 00:33:06.477 ] 00:33:06.477 } 00:33:06.477 }, 00:33:06.477 { 00:33:06.478 "method": "bdev_nvme_attach_controller", 00:33:06.478 "params": { 00:33:06.478 "name": "nvme0", 00:33:06.478 "trtype": "TCP", 00:33:06.478 "adrfam": "IPv4", 00:33:06.478 "traddr": "127.0.0.1", 00:33:06.478 "trsvcid": "4420", 00:33:06.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:06.478 "prchk_reftag": false, 00:33:06.478 "prchk_guard": false, 00:33:06.478 "ctrlr_loss_timeout_sec": 0, 00:33:06.478 "reconnect_delay_sec": 0, 00:33:06.478 "fast_io_fail_timeout_sec": 0, 00:33:06.478 "psk": "key0", 00:33:06.478 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:06.478 "hdgst": false, 00:33:06.478 "ddgst": false, 00:33:06.478 "multipath": "multipath" 00:33:06.478 } 00:33:06.478 }, 00:33:06.478 { 00:33:06.478 "method": "bdev_nvme_set_hotplug", 00:33:06.478 "params": { 00:33:06.478 "period_us": 100000, 00:33:06.478 "enable": false 00:33:06.478 } 00:33:06.478 }, 00:33:06.478 { 00:33:06.478 "method": "bdev_wait_for_examine" 00:33:06.478 } 00:33:06.478 ] 00:33:06.478 }, 00:33:06.478 { 00:33:06.478 
"subsystem": "nbd", 00:33:06.478 "config": [] 00:33:06.478 } 00:33:06.478 ] 00:33:06.478 }' 00:33:06.478 11:35:01 keyring_file -- keyring/file.sh@115 -- # killprocess 2812749 00:33:06.478 11:35:01 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2812749 ']' 00:33:06.478 11:35:01 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2812749 00:33:06.478 11:35:01 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:06.478 11:35:01 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:06.478 11:35:01 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2812749 00:33:06.478 11:35:01 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:06.478 11:35:01 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:06.478 11:35:01 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2812749' 00:33:06.478 killing process with pid 2812749 00:33:06.478 11:35:01 keyring_file -- common/autotest_common.sh@973 -- # kill 2812749 00:33:06.478 Received shutdown signal, test time was about 1.000000 seconds 00:33:06.478 00:33:06.478 Latency(us) 00:33:06.478 [2024-11-19T10:35:01.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.478 [2024-11-19T10:35:01.975Z] =================================================================================================================== 00:33:06.478 [2024-11-19T10:35:01.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:06.478 11:35:01 keyring_file -- common/autotest_common.sh@978 -- # wait 2812749 00:33:06.737 11:35:02 keyring_file -- keyring/file.sh@118 -- # bperfpid=2814300 00:33:06.737 11:35:02 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2814300 /var/tmp/bperf.sock 00:33:06.737 11:35:02 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2814300 ']' 00:33:06.737 11:35:02 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:33:06.737 11:35:02 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:06.737 11:35:02 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:06.737 11:35:02 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:06.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:06.737 11:35:02 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:06.737 11:35:02 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:33:06.737 "subsystems": [ 00:33:06.737 { 00:33:06.737 "subsystem": "keyring", 00:33:06.737 "config": [ 00:33:06.737 { 00:33:06.737 "method": "keyring_file_add_key", 00:33:06.737 "params": { 00:33:06.737 "name": "key0", 00:33:06.737 "path": "/tmp/tmp.nEvGUUXRSP" 00:33:06.737 } 00:33:06.737 }, 00:33:06.737 { 00:33:06.737 "method": "keyring_file_add_key", 00:33:06.737 "params": { 00:33:06.737 "name": "key1", 00:33:06.737 "path": "/tmp/tmp.ZAC8QcgJz6" 00:33:06.737 } 00:33:06.737 } 00:33:06.737 ] 00:33:06.737 }, 00:33:06.737 { 00:33:06.737 "subsystem": "iobuf", 00:33:06.737 "config": [ 00:33:06.737 { 00:33:06.737 "method": "iobuf_set_options", 00:33:06.737 "params": { 00:33:06.737 "small_pool_count": 8192, 00:33:06.737 "large_pool_count": 1024, 00:33:06.737 "small_bufsize": 8192, 00:33:06.737 "large_bufsize": 135168, 00:33:06.737 "enable_numa": false 00:33:06.737 } 00:33:06.737 } 00:33:06.737 ] 00:33:06.737 }, 00:33:06.737 { 00:33:06.737 "subsystem": "sock", 00:33:06.737 "config": [ 00:33:06.737 { 00:33:06.737 "method": "sock_set_default_impl", 00:33:06.737 "params": { 00:33:06.737 "impl_name": "posix" 00:33:06.737 } 00:33:06.737 }, 00:33:06.737 { 00:33:06.737 "method": "sock_impl_set_options", 00:33:06.737 
"params": { 00:33:06.737 "impl_name": "ssl", 00:33:06.737 "recv_buf_size": 4096, 00:33:06.737 "send_buf_size": 4096, 00:33:06.737 "enable_recv_pipe": true, 00:33:06.737 "enable_quickack": false, 00:33:06.737 "enable_placement_id": 0, 00:33:06.737 "enable_zerocopy_send_server": true, 00:33:06.737 "enable_zerocopy_send_client": false, 00:33:06.737 "zerocopy_threshold": 0, 00:33:06.737 "tls_version": 0, 00:33:06.737 "enable_ktls": false 00:33:06.737 } 00:33:06.737 }, 00:33:06.737 { 00:33:06.737 "method": "sock_impl_set_options", 00:33:06.737 "params": { 00:33:06.737 "impl_name": "posix", 00:33:06.737 "recv_buf_size": 2097152, 00:33:06.737 "send_buf_size": 2097152, 00:33:06.737 "enable_recv_pipe": true, 00:33:06.737 "enable_quickack": false, 00:33:06.737 "enable_placement_id": 0, 00:33:06.737 "enable_zerocopy_send_server": true, 00:33:06.737 "enable_zerocopy_send_client": false, 00:33:06.737 "zerocopy_threshold": 0, 00:33:06.737 "tls_version": 0, 00:33:06.737 "enable_ktls": false 00:33:06.737 } 00:33:06.737 } 00:33:06.737 ] 00:33:06.737 }, 00:33:06.737 { 00:33:06.737 "subsystem": "vmd", 00:33:06.737 "config": [] 00:33:06.737 }, 00:33:06.737 { 00:33:06.737 "subsystem": "accel", 00:33:06.737 "config": [ 00:33:06.737 { 00:33:06.737 "method": "accel_set_options", 00:33:06.737 "params": { 00:33:06.737 "small_cache_size": 128, 00:33:06.737 "large_cache_size": 16, 00:33:06.737 "task_count": 2048, 00:33:06.737 "sequence_count": 2048, 00:33:06.737 "buf_count": 2048 00:33:06.737 } 00:33:06.737 } 00:33:06.737 ] 00:33:06.737 }, 00:33:06.737 { 00:33:06.737 "subsystem": "bdev", 00:33:06.737 "config": [ 00:33:06.737 { 00:33:06.737 "method": "bdev_set_options", 00:33:06.737 "params": { 00:33:06.737 "bdev_io_pool_size": 65535, 00:33:06.737 "bdev_io_cache_size": 256, 00:33:06.737 "bdev_auto_examine": true, 00:33:06.737 "iobuf_small_cache_size": 128, 00:33:06.737 "iobuf_large_cache_size": 16 00:33:06.737 } 00:33:06.737 }, 00:33:06.737 { 00:33:06.737 "method": "bdev_raid_set_options", 
00:33:06.737 "params": { 00:33:06.738 "process_window_size_kb": 1024, 00:33:06.738 "process_max_bandwidth_mb_sec": 0 00:33:06.738 } 00:33:06.738 }, 00:33:06.738 { 00:33:06.738 "method": "bdev_iscsi_set_options", 00:33:06.738 "params": { 00:33:06.738 "timeout_sec": 30 00:33:06.738 } 00:33:06.738 }, 00:33:06.738 { 00:33:06.738 "method": "bdev_nvme_set_options", 00:33:06.738 "params": { 00:33:06.738 "action_on_timeout": "none", 00:33:06.738 "timeout_us": 0, 00:33:06.738 "timeout_admin_us": 0, 00:33:06.738 "keep_alive_timeout_ms": 10000, 00:33:06.738 "arbitration_burst": 0, 00:33:06.738 "low_priority_weight": 0, 00:33:06.738 "medium_priority_weight": 0, 00:33:06.738 "high_priority_weight": 0, 00:33:06.738 "nvme_adminq_poll_period_us": 10000, 00:33:06.738 "nvme_ioq_poll_period_us": 0, 00:33:06.738 "io_queue_requests": 512, 00:33:06.738 "delay_cmd_submit": true, 00:33:06.738 "transport_retry_count": 4, 00:33:06.738 "bdev_retry_count": 3, 00:33:06.738 "transport_ack_timeout": 0, 00:33:06.738 "ctrlr_loss_timeout_sec": 0, 00:33:06.738 "reconnect_delay_sec": 0, 00:33:06.738 "fast_io_fail_timeout_sec": 0, 00:33:06.738 "disable_auto_failback": false, 00:33:06.738 "generate_uuids": false, 00:33:06.738 "transport_tos": 0, 00:33:06.738 "nvme_error_stat": false, 00:33:06.738 "rdma_srq_size": 0, 00:33:06.738 "io_path_stat": false, 00:33:06.738 "allow_accel_sequence": false, 00:33:06.738 "rdma_max_cq_size": 0, 00:33:06.738 "rdma_cm_event_timeout_ms": 0, 00:33:06.738 "dhchap_digests": [ 00:33:06.738 "sha256", 00:33:06.738 "sha384", 00:33:06.738 "sha512" 00:33:06.738 ], 00:33:06.738 "dhchap_dhgroups": [ 00:33:06.738 "null", 00:33:06.738 "ffdhe2048", 00:33:06.738 "ffdhe3072", 00:33:06.738 "ffdhe4096", 00:33:06.738 "ffdhe6144", 00:33:06.738 "ffdhe8192" 00:33:06.738 ] 00:33:06.738 } 00:33:06.738 }, 00:33:06.738 { 00:33:06.738 "method": "bdev_nvme_attach_controller", 00:33:06.738 "params": { 00:33:06.738 "name": "nvme0", 00:33:06.738 "trtype": "TCP", 00:33:06.738 "adrfam": "IPv4", 
00:33:06.738 "traddr": "127.0.0.1", 00:33:06.738 "trsvcid": "4420", 00:33:06.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:06.738 "prchk_reftag": false, 00:33:06.738 "prchk_guard": false, 00:33:06.738 "ctrlr_loss_timeout_sec": 0, 00:33:06.738 "reconnect_delay_sec": 0, 00:33:06.738 "fast_io_fail_timeout_sec": 0, 00:33:06.738 "psk": "key0", 00:33:06.738 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:06.738 "hdgst": false, 00:33:06.738 "ddgst": false, 00:33:06.738 "multipath": "multipath" 00:33:06.738 } 00:33:06.738 }, 00:33:06.738 { 00:33:06.738 "method": "bdev_nvme_set_hotplug", 00:33:06.738 "params": { 00:33:06.738 "period_us": 100000, 00:33:06.738 "enable": false 00:33:06.738 } 00:33:06.738 }, 00:33:06.738 { 00:33:06.738 "method": "bdev_wait_for_examine" 00:33:06.738 } 00:33:06.738 ] 00:33:06.738 }, 00:33:06.738 { 00:33:06.738 "subsystem": "nbd", 00:33:06.738 "config": [] 00:33:06.738 } 00:33:06.738 ] 00:33:06.738 }' 00:33:06.738 11:35:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:06.738 [2024-11-19 11:35:02.088790] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:33:06.738 [2024-11-19 11:35:02.088870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814300 ] 00:33:06.738 [2024-11-19 11:35:02.164330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.738 [2024-11-19 11:35:02.225283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:06.996 [2024-11-19 11:35:02.410536] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:07.255 11:35:02 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:07.255 11:35:02 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:33:07.255 11:35:02 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:33:07.255 11:35:02 keyring_file -- keyring/file.sh@121 -- # jq length 00:33:07.255 11:35:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:07.513 11:35:02 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:07.513 11:35:02 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:33:07.513 11:35:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:07.513 11:35:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:07.513 11:35:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:07.513 11:35:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:07.513 11:35:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:07.771 11:35:03 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:33:07.771 11:35:03 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:33:07.771 11:35:03 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:07.771 11:35:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:07.771 11:35:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:07.771 11:35:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:07.771 11:35:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:08.029 11:35:03 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:33:08.029 11:35:03 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:33:08.029 11:35:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:08.029 11:35:03 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:33:08.287 11:35:03 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:33:08.287 11:35:03 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:08.287 11:35:03 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.nEvGUUXRSP /tmp/tmp.ZAC8QcgJz6 00:33:08.287 11:35:03 keyring_file -- keyring/file.sh@20 -- # killprocess 2814300 00:33:08.287 11:35:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2814300 ']' 00:33:08.287 11:35:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2814300 00:33:08.287 11:35:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:08.287 11:35:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:08.287 11:35:03 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2814300 00:33:08.287 11:35:03 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:08.287 11:35:03 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:08.287 11:35:03 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2814300' 00:33:08.287 killing process with pid 2814300 00:33:08.287 11:35:03 keyring_file -- common/autotest_common.sh@973 -- # kill 2814300 00:33:08.287 Received shutdown signal, test time was about 1.000000 seconds 00:33:08.287 00:33:08.287 Latency(us) 00:33:08.287 [2024-11-19T10:35:03.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.287 [2024-11-19T10:35:03.784Z] =================================================================================================================== 00:33:08.287 [2024-11-19T10:35:03.784Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:08.287 11:35:03 keyring_file -- common/autotest_common.sh@978 -- # wait 2814300 00:33:08.545 11:35:03 keyring_file -- keyring/file.sh@21 -- # killprocess 2812702 00:33:08.545 11:35:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2812702 ']' 00:33:08.545 11:35:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2812702 00:33:08.545 11:35:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:33:08.545 11:35:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:08.545 11:35:03 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2812702 00:33:08.545 11:35:03 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:08.545 11:35:03 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:08.545 11:35:03 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2812702' 00:33:08.545 killing process with pid 2812702 00:33:08.545 11:35:03 keyring_file -- common/autotest_common.sh@973 -- # kill 2812702 00:33:08.545 11:35:03 keyring_file -- common/autotest_common.sh@978 -- # wait 2812702 00:33:08.805 00:33:08.805 real 0m14.639s 00:33:08.805 user 0m37.172s 00:33:08.805 sys 0m3.305s 00:33:08.805 11:35:04 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:33:08.805 11:35:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:08.805 ************************************ 00:33:08.805 END TEST keyring_file 00:33:08.805 ************************************ 00:33:09.064 11:35:04 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:33:09.064 11:35:04 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:09.064 11:35:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:09.064 11:35:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.064 11:35:04 -- common/autotest_common.sh@10 -- # set +x 00:33:09.064 ************************************ 00:33:09.064 START TEST keyring_linux 00:33:09.064 ************************************ 00:33:09.064 11:35:04 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:09.064 Joined session keyring: 636054638 00:33:09.064 * Looking for test storage... 
00:33:09.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:09.064 11:35:04 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:09.064 11:35:04 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:33:09.064 11:35:04 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:09.064 11:35:04 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:09.064 11:35:04 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.064 11:35:04 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.064 11:35:04 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.064 11:35:04 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.064 11:35:04 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.064 11:35:04 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.064 11:35:04 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.064 11:35:04 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.064 11:35:04 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.064 11:35:04 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.064 11:35:04 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.064 11:35:04 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@345 -- # : 1 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@368 -- # return 0 00:33:09.065 11:35:04 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.065 11:35:04 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:09.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.065 --rc genhtml_branch_coverage=1 00:33:09.065 --rc genhtml_function_coverage=1 00:33:09.065 --rc genhtml_legend=1 00:33:09.065 --rc geninfo_all_blocks=1 00:33:09.065 --rc geninfo_unexecuted_blocks=1 00:33:09.065 00:33:09.065 ' 00:33:09.065 11:35:04 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:09.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.065 --rc genhtml_branch_coverage=1 00:33:09.065 --rc genhtml_function_coverage=1 00:33:09.065 --rc genhtml_legend=1 00:33:09.065 --rc geninfo_all_blocks=1 00:33:09.065 --rc geninfo_unexecuted_blocks=1 00:33:09.065 00:33:09.065 ' 
00:33:09.065 11:35:04 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:09.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.065 --rc genhtml_branch_coverage=1 00:33:09.065 --rc genhtml_function_coverage=1 00:33:09.065 --rc genhtml_legend=1 00:33:09.065 --rc geninfo_all_blocks=1 00:33:09.065 --rc geninfo_unexecuted_blocks=1 00:33:09.065 00:33:09.065 ' 00:33:09.065 11:35:04 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:09.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.065 --rc genhtml_branch_coverage=1 00:33:09.065 --rc genhtml_function_coverage=1 00:33:09.065 --rc genhtml_legend=1 00:33:09.065 --rc geninfo_all_blocks=1 00:33:09.065 --rc geninfo_unexecuted_blocks=1 00:33:09.065 00:33:09.065 ' 00:33:09.065 11:35:04 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.065 11:35:04 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.065 11:35:04 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.065 11:35:04 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.065 11:35:04 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.065 11:35:04 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:09.065 11:35:04 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:33:09.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:09.065 11:35:04 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:09.065 11:35:04 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:09.065 11:35:04 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:09.065 11:35:04 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:09.065 11:35:04 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:09.065 11:35:04 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:09.065 /tmp/:spdk-test:key0 00:33:09.065 11:35:04 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:09.065 11:35:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:09.065 11:35:04 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:09.324 11:35:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:09.324 11:35:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:09.324 /tmp/:spdk-test:key1 00:33:09.324 11:35:04 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2814668 00:33:09.324 11:35:04 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:09.324 11:35:04 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2814668 00:33:09.324 11:35:04 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2814668 ']' 00:33:09.324 11:35:04 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.324 11:35:04 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.324 11:35:04 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:09.324 11:35:04 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.324 11:35:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:09.324 [2024-11-19 11:35:04.641525] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:33:09.324 [2024-11-19 11:35:04.641612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814668 ] 00:33:09.324 [2024-11-19 11:35:04.717379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.324 [2024-11-19 11:35:04.770761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.582 11:35:05 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.582 11:35:05 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:33:09.582 11:35:05 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:09.582 11:35:05 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.582 11:35:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:09.582 [2024-11-19 11:35:05.031293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.582 null0 00:33:09.582 [2024-11-19 11:35:05.063371] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:09.582 [2024-11-19 11:35:05.063877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:09.841 11:35:05 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.841 11:35:05 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:09.841 388075358 00:33:09.841 11:35:05 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:09.841 229021563 00:33:09.841 11:35:05 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2814676 00:33:09.841 11:35:05 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:09.841 11:35:05 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2814676 /var/tmp/bperf.sock 00:33:09.841 11:35:05 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2814676 ']' 00:33:09.841 11:35:05 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:09.841 11:35:05 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.841 11:35:05 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:09.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:09.841 11:35:05 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.841 11:35:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:09.841 [2024-11-19 11:35:05.130763] Starting SPDK v25.01-pre git sha1 73f18e890 / DPDK 24.03.0 initialization... 
00:33:09.841 [2024-11-19 11:35:05.130842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814676 ] 00:33:09.841 [2024-11-19 11:35:05.206560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.841 [2024-11-19 11:35:05.269485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.099 11:35:05 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.099 11:35:05 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:33:10.099 11:35:05 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:10.099 11:35:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:10.358 11:35:05 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:10.358 11:35:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:10.616 11:35:06 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:10.616 11:35:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:10.874 [2024-11-19 11:35:06.256800] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:10.874 nvme0n1 00:33:10.874 11:35:06 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:33:10.874 11:35:06 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:10.874 11:35:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:10.874 11:35:06 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:10.874 11:35:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:10.874 11:35:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.132 11:35:06 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:11.132 11:35:06 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:11.389 11:35:06 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:11.389 11:35:06 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:11.390 11:35:06 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.390 11:35:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.390 11:35:06 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:11.648 11:35:06 keyring_linux -- keyring/linux.sh@25 -- # sn=388075358 00:33:11.648 11:35:06 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:11.648 11:35:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:11.648 11:35:06 keyring_linux -- keyring/linux.sh@26 -- # [[ 388075358 == \3\8\8\0\7\5\3\5\8 ]] 00:33:11.648 11:35:06 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 388075358 00:33:11.648 11:35:06 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:11.648 11:35:06 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:11.648 Running I/O for 1 seconds... 00:33:12.581 11338.00 IOPS, 44.29 MiB/s 00:33:12.581 Latency(us) 00:33:12.581 [2024-11-19T10:35:08.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.581 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:12.581 nvme0n1 : 1.01 11341.46 44.30 0.00 0.00 11217.78 3155.44 14272.28 00:33:12.581 [2024-11-19T10:35:08.078Z] =================================================================================================================== 00:33:12.581 [2024-11-19T10:35:08.078Z] Total : 11341.46 44.30 0.00 0.00 11217.78 3155.44 14272.28 00:33:12.581 { 00:33:12.581 "results": [ 00:33:12.581 { 00:33:12.581 "job": "nvme0n1", 00:33:12.581 "core_mask": "0x2", 00:33:12.581 "workload": "randread", 00:33:12.581 "status": "finished", 00:33:12.581 "queue_depth": 128, 00:33:12.581 "io_size": 4096, 00:33:12.581 "runtime": 1.011069, 00:33:12.581 "iops": 11341.461364160112, 00:33:12.581 "mibps": 44.30258345375044, 00:33:12.581 "io_failed": 0, 00:33:12.581 "io_timeout": 0, 00:33:12.581 "avg_latency_us": 11217.776475490053, 00:33:12.581 "min_latency_us": 3155.437037037037, 00:33:12.581 "max_latency_us": 14272.284444444444 00:33:12.581 } 00:33:12.581 ], 00:33:12.581 "core_count": 1 00:33:12.581 } 00:33:12.581 11:35:08 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:12.581 11:35:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:12.839 11:35:08 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:12.839 11:35:08 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:12.839 11:35:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:12.839 11:35:08 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:12.839 11:35:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:12.839 11:35:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.097 11:35:08 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:13.097 11:35:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:13.097 11:35:08 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:13.097 11:35:08 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:13.097 11:35:08 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:33:13.097 11:35:08 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:13.097 11:35:08 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:33:13.097 11:35:08 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:13.097 11:35:08 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:33:13.097 11:35:08 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:13.097 11:35:08 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:13.097 11:35:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:13.355 [2024-11-19 11:35:08.848000] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:13.355 [2024-11-19 11:35:08.848632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159dbc0 (107): Transport endpoint is not connected 00:33:13.355 [2024-11-19 11:35:08.849625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159dbc0 (9): Bad file descriptor 00:33:13.355 [2024-11-19 11:35:08.850624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:13.355 [2024-11-19 11:35:08.850665] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:13.355 [2024-11-19 11:35:08.850680] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:13.355 [2024-11-19 11:35:08.850695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:33:13.613 request: 00:33:13.613 { 00:33:13.613 "name": "nvme0", 00:33:13.613 "trtype": "tcp", 00:33:13.613 "traddr": "127.0.0.1", 00:33:13.613 "adrfam": "ipv4", 00:33:13.613 "trsvcid": "4420", 00:33:13.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:13.613 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:13.613 "prchk_reftag": false, 00:33:13.613 "prchk_guard": false, 00:33:13.613 "hdgst": false, 00:33:13.613 "ddgst": false, 00:33:13.613 "psk": ":spdk-test:key1", 00:33:13.613 "allow_unrecognized_csi": false, 00:33:13.613 "method": "bdev_nvme_attach_controller", 00:33:13.613 "req_id": 1 00:33:13.613 } 00:33:13.613 Got JSON-RPC error response 00:33:13.613 response: 00:33:13.613 { 00:33:13.613 "code": -5, 00:33:13.613 "message": "Input/output error" 00:33:13.613 } 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@33 -- # sn=388075358 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 388075358 00:33:13.613 1 links removed 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:13.613 
11:35:08 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@33 -- # sn=229021563 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 229021563 00:33:13.613 1 links removed 00:33:13.613 11:35:08 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2814676 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2814676 ']' 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2814676 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2814676 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2814676' 00:33:13.613 killing process with pid 2814676 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@973 -- # kill 2814676 00:33:13.613 Received shutdown signal, test time was about 1.000000 seconds 00:33:13.613 00:33:13.613 Latency(us) 00:33:13.613 [2024-11-19T10:35:09.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:13.613 [2024-11-19T10:35:09.110Z] =================================================================================================================== 00:33:13.613 [2024-11-19T10:35:09.110Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:13.613 11:35:08 keyring_linux -- common/autotest_common.sh@978 -- # wait 2814676 
00:33:13.874 11:35:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2814668 00:33:13.874 11:35:09 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2814668 ']' 00:33:13.874 11:35:09 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2814668 00:33:13.875 11:35:09 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:33:13.875 11:35:09 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:13.875 11:35:09 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2814668 00:33:13.875 11:35:09 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:13.875 11:35:09 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:13.875 11:35:09 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2814668' 00:33:13.875 killing process with pid 2814668 00:33:13.875 11:35:09 keyring_linux -- common/autotest_common.sh@973 -- # kill 2814668 00:33:13.875 11:35:09 keyring_linux -- common/autotest_common.sh@978 -- # wait 2814668 00:33:14.133 00:33:14.133 real 0m5.271s 00:33:14.133 user 0m10.471s 00:33:14.133 sys 0m1.601s 00:33:14.133 11:35:09 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:14.133 11:35:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:14.133 ************************************ 00:33:14.133 END TEST keyring_linux 00:33:14.133 ************************************ 00:33:14.133 11:35:09 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:14.133 11:35:09 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:14.133 11:35:09 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:14.133 11:35:09 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:33:14.133 11:35:09 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:14.390 11:35:09 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:14.390 11:35:09 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:14.390 11:35:09 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:33:14.390 11:35:09 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:14.390 11:35:09 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:14.390 11:35:09 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:14.390 11:35:09 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:14.390 11:35:09 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:14.390 11:35:09 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:14.390 11:35:09 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:14.390 11:35:09 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:14.390 11:35:09 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:14.390 11:35:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:14.390 11:35:09 -- common/autotest_common.sh@10 -- # set +x 00:33:14.390 11:35:09 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:14.390 11:35:09 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:14.390 11:35:09 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:14.390 11:35:09 -- common/autotest_common.sh@10 -- # set +x 00:33:16.293 INFO: APP EXITING 00:33:16.293 INFO: killing all VMs 00:33:16.293 INFO: killing vhost app 00:33:16.293 INFO: EXIT DONE 00:33:17.670 0000:81:00.0 (8086 0a54): Already using the nvme driver 00:33:17.670 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:33:17.670 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:33:17.670 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:33:17.670 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:33:17.670 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:33:17.670 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:33:17.670 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:33:17.670 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:33:17.670 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:33:17.670 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:33:17.670 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:33:17.930 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:33:17.930 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:33:17.930 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:33:17.930 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:33:17.931 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:33:19.305 Cleaning 00:33:19.305 Removing: /var/run/dpdk/spdk0/config 00:33:19.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:19.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:19.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:19.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:19.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:19.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:19.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:19.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:19.305 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:19.305 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:19.305 Removing: /var/run/dpdk/spdk1/config 00:33:19.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:19.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:19.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:19.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:19.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:19.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:19.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:19.305 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:19.305 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:19.305 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:19.305 Removing: /var/run/dpdk/spdk2/config 00:33:19.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:19.305 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:19.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:19.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:19.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:19.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:19.305 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:19.564 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:19.564 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:19.564 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:19.564 Removing: /var/run/dpdk/spdk3/config 00:33:19.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:19.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:19.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:19.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:19.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:19.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:19.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:19.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:19.564 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:19.564 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:19.564 Removing: /var/run/dpdk/spdk4/config 00:33:19.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:19.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:19.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:19.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:19.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:19.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:19.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:19.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:19.564 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:19.564 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:33:19.564 Removing: /dev/shm/bdev_svc_trace.1
00:33:19.564 Removing: /dev/shm/nvmf_trace.0
00:33:19.564 Removing: /dev/shm/spdk_tgt_trace.pid2466973
00:33:19.564 Removing: /var/run/dpdk/spdk0
00:33:19.564 Removing: /var/run/dpdk/spdk1
00:33:19.564 Removing: /var/run/dpdk/spdk2
00:33:19.564 Removing: /var/run/dpdk/spdk3
00:33:19.564 Removing: /var/run/dpdk/spdk4
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2465145
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2466014
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2466973
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2467415
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2468107
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2468253
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2468965
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2468976
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2469236
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2470686
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2471737
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2471940
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2472135
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2472458
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2472666
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2472828
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2472980
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2473168
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2473485
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2476591
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2476753
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2476917
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2476926
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2477357
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2477360
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2477791
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2477794
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2478089
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2478094
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2478262
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2478389
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2478769
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2478924
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2479248
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2481777
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2484701
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2491993
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2492475
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2495336
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2495616
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2498547
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2502572
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2504764
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2512512
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2518464
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2519784
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2520460
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2531840
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2534534
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2565414
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2569104
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2573254
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2577937
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2577939
00:33:19.564 Removing: /var/run/dpdk/spdk_pid2578600
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2579142
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2579796
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2580192
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2580223
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2580456
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2580593
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2580601
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2581261
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2581797
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2582450
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2582857
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2582872
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2583120
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2584030
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2584831
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2591128
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2619460
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2622685
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2623860
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2625182
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2625327
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2625466
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2625607
00:33:19.565 Removing: /var/run/dpdk/spdk_pid2626054
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2627385
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2628245
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2628676
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2630300
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2630722
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2631168
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2633970
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2637673
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2637674
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2637675
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2640183
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2646355
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2649007
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2653205
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2654152
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2655247
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2656216
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2659386
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2662260
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2664927
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2669861
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2669866
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2673067
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2673203
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2673448
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2673716
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2673721
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2676907
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2677249
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2680332
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2682397
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2686648
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2690526
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2697589
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2702574
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2702578
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2716462
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2716872
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2717397
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2717807
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2718498
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2718910
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2719819
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2720346
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2723149
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2723291
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2727497
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2727576
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2731332
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2734234
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2741483
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2741967
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2744770
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2744979
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2747920
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2751941
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2754212
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2761733
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2767682
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2768900
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2769563
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2780734
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2783399
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2785409
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2790994
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2791016
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2794559
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2796463
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2797860
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2798722
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2800129
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2800882
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2806861
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2807177
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2807568
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2809351
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2809754
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2810031
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2812702
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2812749
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2814300
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2814668
00:33:19.826 Removing: /var/run/dpdk/spdk_pid2814676
00:33:19.826 Clean
00:33:20.084 11:35:15 -- common/autotest_common.sh@1453 -- # return 0
00:33:20.084 11:35:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:33:20.084 11:35:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:20.084 11:35:15 -- common/autotest_common.sh@10 -- # set +x
00:33:20.084 11:35:15 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:33:20.084 11:35:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:20.084 11:35:15 -- common/autotest_common.sh@10 -- # set +x
00:33:20.084 11:35:15 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:20.084 11:35:15 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:33:20.084 11:35:15 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:33:20.084 11:35:15 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:33:20.084 11:35:15 -- spdk/autotest.sh@398 -- # hostname
00:33:20.084 11:35:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:33:20.342 geninfo: WARNING: invalid characters removed from testname!
00:33:52.425 11:35:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:54.330 11:35:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:57.612 11:35:52 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:00.890 11:35:55 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:03.416 11:35:58 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:06.695 11:36:01 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:09.976 11:36:04 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:09.976 11:36:04 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:09.976 11:36:04 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:34:09.976 11:36:04 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:09.976 11:36:04 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:09.976 11:36:04 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:09.976 + [[ -n 2394142 ]]
00:34:09.976 + sudo kill 2394142
00:34:09.987 [Pipeline] }
00:34:10.003 [Pipeline] // stage
00:34:10.009 [Pipeline] }
00:34:10.025 [Pipeline] // timeout
00:34:10.030 [Pipeline] }
00:34:10.045 [Pipeline] // catchError
00:34:10.049 [Pipeline] }
00:34:10.060 [Pipeline] // wrap
00:34:10.065 [Pipeline] }
00:34:10.075 [Pipeline] // catchError
00:34:10.083 [Pipeline] stage
00:34:10.085 [Pipeline] { (Epilogue)
00:34:10.097 [Pipeline] catchError
00:34:10.099 [Pipeline] {
00:34:10.107 [Pipeline] echo
00:34:10.109 Cleanup processes
00:34:10.113 [Pipeline] sh
00:34:10.395 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:10.395 2825874 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:10.409 [Pipeline] sh
00:34:10.694 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:10.694 ++ grep -v 'sudo pgrep'
00:34:10.694 ++ awk '{print $1}'
00:34:10.694 + sudo kill -9
00:34:10.694 + true
00:34:10.706 [Pipeline] sh
00:34:11.062 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:21.046 [Pipeline] sh
00:34:21.333 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:21.333 Artifacts sizes are good
00:34:21.349 [Pipeline] archiveArtifacts
00:34:21.356 Archiving artifacts
00:34:21.497 [Pipeline] sh
00:34:21.783 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:34:21.799 [Pipeline] cleanWs
00:34:21.810 [WS-CLEANUP] Deleting project workspace...
00:34:21.810 [WS-CLEANUP] Deferred wipeout is used...
00:34:21.817 [WS-CLEANUP] done
00:34:21.819 [Pipeline] }
00:34:21.836 [Pipeline] // catchError
00:34:21.849 [Pipeline] sh
00:34:22.131 + logger -p user.info -t JENKINS-CI
00:34:22.140 [Pipeline] }
00:34:22.153 [Pipeline] // stage
00:34:22.158 [Pipeline] }
00:34:22.171 [Pipeline] // node
00:34:22.176 [Pipeline] End of Pipeline
00:34:22.212 Finished: SUCCESS